Hope: Living & Loving With HIV In Jamaica

In 2007, poet and writer Kwame Dawes made five trips to Jamaica to learn about the impact of HIV and AIDS there. He talked both to people with HIV and to people in the medical community fighting the disease. Based on the stories he collected, Dawes wrote 22 poems, which he calls “anthems of hope and alarms of warning.” Hope: Living & Loving With HIV In Jamaica is a multimedia story that combines those poems with video, audio, images, and music.

How It Works

The 22 poems are the focus of Hope: Living & Loving. They are presented both as text and as audio. Dawes himself recites the poems. For nine of the poems, there is also an audio recording of a musical version with the words as lyrics. For four “featured” poems, there is a slideshow version, where each line of poetry is paired with a photo. The slideshow plays in step with Dawes’ recital of the poem and the text appears, line by line, like photo captions. So when the user hears the line “the bodies broken, placid as saints, hobble along the tiled corridors, from room to room,” they see a photo of a man, bent over, walking down the hallway of a hospice.

Most of the poems are linked to profiles of the interview subjects who inspired them. These profiles include one to four short interview clips that deal with particular topics, such as “on accepting those with HIV” and “on sex, sin, and death.”

There are also two longer “infocus” videos that discuss the stigma surrounding HIV in Jamaica and what it is like to live with the disease. These look at the bigger issues involved, providing some context for the poems and the people interviewed.

Finally, there is an image gallery of related photos. These photos also appear in the background when other content appears on the screen.

What Works, What Doesn’t

Without judging the quality of the poems themselves, something I am certainly unqualified to do, I can say that the poetry really benefits from the multimedia treatment. It’s an oral form as much as it is a written one. When it is read out loud (by the poet himself, no less), the rhythm of the lines comes through.

The audio doesn’t strictly need a visual accompaniment, but for the four featured poems that have one, it is generally well done. The images are subtle and don’t compete with the words for the user’s attention, which is exactly what supporting media should do: complement the lead form, not fight it.

The supporting videos that give context to the poetry are all well used. The infocus videos use a variety of footage to address two particularly important subjects. The briefer interview clips give the user a personal connection to the various subjects. When someone talks on camera about how they felt when they first found out they were HIV positive, it can’t help but be emotional. And by linking the poems directly to the particular subjects who inspired them, the two forms are connected.


Talking To The Taliban

“Understanding the insurgents is a basic part of reporting on the Afghan war, but it’s a remarkably difficult task,” writes Globe and Mail reporter Graeme Smith in the introduction to Talking To The Taliban. Smith’s solution was to send out a researcher, someone with connections to the insurgency, with a camera to interview Taliban members. The footage the researcher brought back – 42 different interviews – is the basis of Talking To The Taliban.

How It Works

The narrative of the project is broken up into six parts: “Negotiations,” “Forced To Fight,” “The Tribal War,” “Pakistan Relations,” “View of the World,” and “Suicide Bombing.” Each part consists of a short video and a text article on the topic. The videos include footage from the interviews with the Taliban as well as shots of Smith himself speaking to the camera in front of ruins or rows of tanks. These interview clips are mixed with other images from Afghanistan as well as graphics and text quotes from a variety of figures. The text offers a sort of deconstruction of the footage, with quotes from others commenting both on the topic at hand and on the interviews with the Taliban fighters.

In addition to the videos and images, Talking To The Taliban also includes all of the interviews with members of the Taliban, unedited, and an interactive timeline about the Taliban from 1994 to 2008. Five infographics with accompanying text demonstrate the breakdown of tribes in Afghanistan, the number of air strikes in 2007 compared to 2006, the number of suicide attacks in 2007 compared to 2006, the amount of opium poppy cultivation in the country, and the risk to humanitarian operations in the country.

What Works, What Doesn’t

With such rich source material – the interviews with the Taliban themselves – the videos often have a certain raw power. The video quality isn’t great, but the men talk directly to the camera, faces covered, often with a machine gun on their laps. The same can’t be said when Smith is talking. He isn’t the most comfortable figure on camera, and he talks in a clipped manner with too many pauses. Sometimes writers are better off writing.

Talking To The Taliban might have been more effective if text had been used to add context with video clips of the Taliban interspersed. The text that accompanies each section covers much of the same ground as what Smith says anyway. Or Smith could have spoken over images and video as the Globe did with Behind the Veil.

And the graphics and other extras, tacked on at the end of the piece, are easily forgotten about. It’s important to integrate extras like these into the main content or at least provide direct links to them.

NYPL Biblion: World’s Fair

Is this how all archives will look in the future? Produced by the New York Public Library, the Biblion: World’s Fair iPad app presents “the World of Tomorrow,” that is, the 1939–40 World’s Fair Collection in the Manuscripts and Archives Division of the library. That collection includes documents, images, films, audio, and essays from and about the famous forward-looking fair, which the app calls a “living, breathing version of today’s unbounded digital landscape, where you could meet foreign people, see exotic locales, experience the world, and find the future.” Hyperbole aside, Biblion: World’s Fair is indeed an impressive experience.

How It Works

The app begins with a menu of six rather nebulous themes to explore: “A Moment In Time,” “Enter The World Of Tomorrow,” “Beacon of Idealism,” “You Ain’t Seen Nothin’ Yet,” “Fashion, Food and Famous Faces,” and “From The Stacks.” Within each theme are a number of stories, essays, and galleries, each of which pairs a certain amount of text, broken into selections, with corresponding photos, documents, and images, and the occasional video and audio file interspersed.

The navigation is novel. When the iPad is held on its side, the user can swipe through each page of media, one at a time, from left to right. Turn the iPad for book view, and the same information can be scrolled through as one continuous page, with the option to expand an embedded image or video to full size.

When there is a related story or essay in one of the other themes, the user can follow the “connection” to bring up a blurb about that story and, if they want, read the full story. To browse all the stories, essays, and galleries at once, there is a larger menu with every story listed, in which the different types of media – audio & video, featured images, documents, and connections – are colour coded for easy recognition.

What Works, What Doesn’t

As a historical record, Biblion: World’s Fair is a great resource, touching on a wide variety of topics under the umbrella of the fair and drawing out common historical themes to connect them.

There is a good balance between primary and secondary sources. The essays and stories provide context and understanding while the images and documents help connect the written parts to the time and place. The user doesn’t just read an essay about “Einstein, the Fair, and the Bomb,” they also see photographs of Einstein at the fair and bombshells being moved into the exhibition of the British Pavilion.

The NYPL wisely did not overload the app with either text or archival material. Too much of the first would be far too didactic. Too much of the second would be far too ponderous.

The balance is not as well kept in the mix of media. The vast majority of primary sources are images and documents with only a relative handful of videos and audio clips (although this could be just the source material available). And more often than not, the video and audio content is paired with a short bit of text rather than being integrated into a larger essay.

The navigation works better. When content is organized in a non-linear way, as it is here, the overall experience can easily become fragmented because the user is left to find the common themes on their own. By adding direct connections between essays and stories in different sections that share similar themes, Biblion: World’s Fair helps the user do this.

This, however, makes much of the content of Biblion: World’s Fair more akin to what you would find in a book than something really innovative.

How Much Is Left?

“If the 20th century was an expansive era seemingly without boundaries – a time of jet planes, space travel and the Internet – the early years of the 21st century have showed us the limits of our small world.” This stark statement begins the introduction of How Much Is Left?, a Flash-based interactive created for the magazine Scientific American by Zemi, the people behind FLYP. Time, that is, how much time we have before we run out of resources if our consumption levels don’t change, is the focus of How Much Is Left?

How It Works

Befitting the topic, an interactive timeline is at the centre of How Much Is Left? The estimated lifespans of 16 different resources are tracked from 1976 to 2560. One by one, each resource comes to an end. In 1976, the glaciers start to melt. In 2014, we will hit peak oil. In 2072, we will have used up 90% of the coal available to us. In 2560, we run out of lithium, a key metal used to make batteries.

At the point on the timeline when each resource runs out, there is an icon to click on that brings up the main content: a pop up with text, a graphic (sometimes static, sometimes interactive), and occasionally a short audio track providing further information about that resource.

The timeline is organized by resource type: 4 minerals, 2 fossil fuels, 2 forms of biodiversity, 2 food sources, and 4 water sources. Each type is colour coded and the user can choose to turn off all but one resource type if they wish.

In addition to the timeline, there are also five documentary-style videos, one for each type of resource, about the depletion of that resource, with related footage and graphics intercut. Finally, there is a poll that asks users whether or not technological innovation will solve the problem of shrinking resources (as of this post, 54% of people said no).

What Works, What Doesn’t

The best thing about How Much Is Left? is that it doesn’t try to do too much. The focus is on the timeline. There aren’t additional sections that might bog down the overall experience. If the user wants further information, the videos corral together different bits of context (expert interviews, related footage and further graphics) in one place for easy watching.

Visually, the timeline works great. Using different colours for different resource types makes it easy to follow the different resources as time goes on. And the dwindling number of lines as time passes certainly makes an impression.

It also works well as a way to navigate through the content. When navigation tools are built into the content itself instead of being presented in a separate menu, it can add to the overall experience. In How Much Is Left?, the user already knows an important piece of information – the year the resource will run out – before they click on the icon to see the pop up. The only problem is that the user has no idea what that specific resource is until they click on the icon. The only identification given on the timeline itself is the colour indicating what category of resource it is.

However, the pop ups themselves are not so well done. The information varies from overly general, dull text (“Indium is a silvery metal that sits next to platinum on the periodic table and shares many of its properties such as its color and density”) to dense graphs that are not well explained.

It sometimes takes a minute to understand how the information corresponds to the date the resource runs out. For example, in 2025, some countries will have slightly less water available per capita per year. Why the year 2025 is particularly significant is never explained.

And audio seems to have been used in some pop ups instead of text only to limit the amount of reading the user would have to do. The information presented in each form is pretty much interchangeable in tone and substance.

A Matter Of Life & Death

A Matter of Life & Death is one of 350 stories published by the online multimedia magazine FLYP between 2007 and 2009. At that time, FLYP was an innovative take on the basic magazine idea. Built with Flash, it combined text, video, audio, animation, and interactivity to create “a new kind of storytelling.” FLYP looked like a magazine – the user flipped from one page to the next (hence the name FLYP) and the focus was still on text. But along with the text there was also a variety of different media – animated introductions, embedded video and audio clips, interactive infographics, etc.

A Matter of Life & Death is a typical example of a FLYP story. It covers a topic that wouldn’t be out of place in any regular magazine: end of life health care. Indeed, the text portion reads like it was published in a magazine. But along with the text there are also video and audio clips, text pop ups with extra information, and links to outside sources.

How It Works

A Matter Of Life & Death begins with a short video intro that sets the tone of the story. “Over a million terminally ill Americans will choose hospice care this year. Insurance companies make billions by denying procedures to patients in need. Still, 54 percent of people believe that reform will lead to rationed care,” reads a series of statements between video clips.

On the pages that follow, a complete text story is augmented by multimedia extras. There are “vital statistics” icons that the user has to click on to read. There are video interviews with patients, their loved ones, and doctors. There are roll-over text pop ups that define key medical terms like “Aggressive Treatment,” “Advance Directive,” and “Living Will.” There are extra “read more” pages with further information, such as the differences between palliative and hospice care. There are links to outside resources for still more reading. There are audio clips of a number of experts speaking about the different dimensions of end of life care.

All of this media is organized like a traditional magazine – one page at a time. On every page there are columns of text surrounded by a mix of different media and hyperlinks elsewhere.

What Works, What Doesn’t

Non-fiction articles in magazines often follow certain style conventions. They begin with a descriptive scene that illustrates a larger theme and draws the reader in. A mix of descriptive and more informative paragraphs follows to fill out the story and provide greater context. Characters are described and direct quotes given. It’s a flexible style that can be easily adapted for any number of subjects.

A Matter of Life & Death follows this format. The text begins with the case of Maria Cerro, who was admitted to hospital with liver failure. It goes on to describe health care reform proposals under consideration in the U.S. Congress and common misconceptions about “death panels.” There are quotes from a doctor and a Jesuit priest. This is all well and good, as far as it goes. The question is, how do you integrate the media?

The one quibble I have with the overall design of FLYP is in the media integration. Because the text stories are written as complete narratives, the multimedia that surrounds the stories often feels more like extras than integral parts of the story. The reader can happily flip from one page to the next and completely ignore the videos, pop ups, audio, and infographics. You could, of course, consider this to be an advantage rather than a flaw – the multimedia doesn’t get in the way if you’re not interested in it. But it also leaves FLYP with one foot firmly planted in the print tradition.

Ian Fisher: American Soldier

Ian Fisher: American Soldier tracks one soldier in the U.S. Army, from basic training to Iraq and back again. For 27 months, Fisher and those close to him were followed by reporters and a photographer from the Denver Post. They covered his graduation from high school, recruitment, induction, training, deployment, and his return from combat.

Ian Fisher: American Soldier started as a feature story in the newspaper and was later given the multimedia treatment online.

How It Works

The multimedia shell for the story is divided into four sections according to the type of media: photos, videos, text, and extras.

Within the photo and video sections are subsections that follow Fisher’s journey chronologically. There are eight photo slideshow chapters, from “Signing Up” to “Coming Home.” Each photo has a caption that tells the user what they are looking at. The ten videos span the same time frame and feature interviews with Fisher, friends, family, and fellow soldiers.

The story section recreates the original newspaper article, word for word, as a 63-page digital book that users can flip through page by page with a few photographs peppered throughout. The user can also read the story as a scrollable text page on the newspaper’s website.

The extras section includes more video content, lists of Army acronyms and rankings, and an interactive map of U.S. bases in Iraq.

A menu at the bottom of the screen allows the user to navigate between sections.

What Works, What Doesn’t

Ian Fisher: American Soldier isn’t trying to refashion the original newspaper story into a multimedia story. It’s just piling the multimedia content on top. The user has to read the text story first, then look through the photos, then watch the videos, then look at the extras. The different forms of media aren’t integrated together in any impactful way.

That said, much of the multimedia is quite compelling. The photos capture personal moments between different characters while the accompanying captions fill in the context. Browsing from one photo to the next tells Ian Fisher’s story almost as well as the text story does.

The videos are a mix of interview clips and photos. There isn’t any commentary, but they still tell a story. And as I’ve noted elsewhere, commentary from journalists can sometimes hurt the impact of a video. It’s always better to show than tell.

The extras, consigned as they are to their own section, don’t add that much to the overall experience. Should the user want to dig deeper, they’re there, but generally speaking, extras need to be directly linked to from the story itself (as was done in Lifted for example) for them to have any kind of impact or real utility.

Love Letters To The Future

How do you make user-generated content into a compelling overall experience? First, you don’t ask too much of users. Second, you build a fool-proof template. Love Letters To The Future does both of these things. The site is a collection of letters created by users to the people of 2109 about the threat of climate change.

How It Works

The letters are either short texts (140 characters or less), YouTube videos, or uploaded images with captions. Users can address one of three topics: “To my Great-grandchildren”, “On Arctic Ice”, or “World Underwater”.

All of the previously submitted letters are available for browsing by date or popularity and users can share or vote for their favourites. The top 100 letters, according to that vote, were sealed in a time capsule, to be opened 100 years from now.

Along with the user-generated letters, Love Letters To The Future also includes a number of featured “first-hand accounts” of climate change from individuals living around the world. Some of these are videos and some are images with captions. There are also videos of celebrities (including, of course, David Suzuki) introducing the topics.

To add a more cinematic element, there are also a number of planted letters ostensibly sent from the future by a woman named Maya. Those letters show a planet in decay after nothing was done to stop climate change. Each letter comes with a puzzle for the user to solve in order to find Maya’s next message, giving Love Letters To The Future a game element.

What Works, What Doesn’t

When you rely on users for content, what you get is always going to be a mixed bag. Some of the letters here are truly heartfelt – a message to a still unborn child coupled with a sonogram image, for example. Many are dull platitudes – letters that read “Let’s do it!” or “Stop using oil.” Some are just gibberish.

Fortunately, the template does help keep things manageable. Because the amount of content that could be uploaded was limited, the user can easily browse through the letters to find the good ones. And allowing the user to browse by the number of votes a letter has received weeds out most of the less interesting messages.

The advantage of all the user-generated content is the sheer variety of its origins. There are letters here from all around the world, written in different languages (there’s a handy built-in Google translator). There are pictures of people on top of mountains, relaxing on a beach, and tobogganing down a snowy hill. Yes, the sentiment being expressed is pretty much the same in every letter, but the fact that so many people from so many different places all share that same sentiment is at least half the point.