The Dawn Of The Age Of Responsive Media 

By James Begole

This year's CES announcements were more exciting to me than ever before. Sure, over the years we've already seen dozens of drones, exoskeletons, robots, rollable displays, virtual- and augmented-reality headsets, as well as personalized artificial intelligence for smartphones, smart homes, driver-assisted and autonomous cars (aka autonomobiles) and more. They're all better, faster and cheaper this year, but that's not what excites me. My excitement is not about the particular products coming out in 2016; I care more about what they say about the future. The exciting thing this year is that we're reaching a critical mass of technology adoption for a wealth of new video and audio media experiences - media that responds dynamically to the consumer's attention, engagement and context: Responsive Media. 

Today's media content sits at two extreme ends of a spectrum. At one end are "lean-back" experiences such as movies and television, where consumers are largely passive and are led through a story by content authors/producers. At the other end are "lean-forward" experiences in the form of games, in which the user is highly engaged and drives the action through an environment created by content authors/producers. Surely there must be something between these extremes, some form of "interactive media" where the narrative can be driven by authors/producers and also tailored dynamically to the situation and preferences of audience members. Unfortunately, though rich examples have been produced, "interactive media" has yet to gain traction, largely for two reasons. 

First, content producers have a story to tell, and it usually follows a single thread through a beginning, middle and end. Video production is time-consuming, and generating branches adds time and effort. Second, interactive media today usually works by asking the audience to pick a branch when the story reaches a branch point. Story viewing is largely a "lean-back" experience in which the audience has immersed themselves in an imaginary world; asking them to select a branch breaks the immersion and any suspension of disbelief they may have entered. Watching the story is no longer just fun and entertaining: now it's work. 

Today's emerging technologies will solve both of those problems - not by asking the audience to interact directly with the media, as "interactive media" did in the past, but by creating intelligent media experiences that respond to the audience's engagement, preferences and situation. When I was at Xerox PARC in the 2000s, we called this concept Responsive Media and used it to create a number of media-based shopping experiences. Those prototypes were bulky and time-consuming to create, but the technologies we are seeing announced recently will break down the two major barriers. 
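To make the idea concrete, here is a minimal sketch of how a responsive player might choose a story branch from a sensed engagement score instead of pausing to ask the viewer. Everything here - the tiny story graph, the pick_branch function, the 0.5 threshold - is a hypothetical illustration, not a description of any shipping product:

```python
# Hypothetical sketch: a responsive player picks the next scene from a
# sensed engagement score (0.0 = bored, 1.0 = riveted) rather than
# interrupting the viewer with an on-screen menu.

# A toy story graph: each scene maps mood labels to follow-up scenes.
STORY = {
    "opening":     {"calm": "slow_reveal", "excited": "chase_scene"},
    "slow_reveal": {},   # leaf node: the story ends here
    "chase_scene": {},
}

def pick_branch(scene, engagement):
    """Select the next scene without breaking immersion:
    a low engagement score steers toward the more dramatic branch."""
    branches = STORY[scene]
    if not branches:
        return None  # end of story
    return branches["excited"] if engagement < 0.5 else branches["calm"]

# A drifting viewer (score 0.3) is routed into the action branch;
# an engaged viewer (score 0.8) stays on the quieter thread.
print(pick_branch("opening", 0.3))   # chase_scene
print(pick_branch("opening", 0.8))   # slow_reveal
```

In a real system the engagement score would of course come from sensors rather than a hand-typed number, which is exactly what the technologies below make practical.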

Reducing the cost of multi-branching content 

First, addressing the added cost of producing branching content, we see two trends. Most important, this year brings an amazing crop of new cameras that generate 360-degree panoramic video, such as Nikon's KeyMission 360, and some, like the Vuze, even shoot in stereo. In contrast to the slew of large and expensive multi-camera rigs announced last year by Google/GoPro, Lytro, Jaunt and Nokia, which were aimed at professionals, the newly announced products are more consumer-friendly in size and price, opening a wider market for 360-degree video production. Add to this the growing number of drones, robots and other vehicles that can take cameras to places humans cannot easily reach, and we'll see an explosion of new content from fascinating points of view around the globe. 

But how can producers edit and tell a story from the large amounts of multi-view video that these new camera systems so easily capture? In the past, you needed high-end computers and software to manage such a production, but now we're seeing web-based editing systems like Interlude's Treehouse that make it easier for producers to specify branch points at which the audience can navigate through a multi-view landscape. As with the 360-degree cameras, this kind of capability was once reserved for professional video production studios but is now being brought into the hands of consumers. 

Keeping the audience immersed 

On the consumption side of the media experience, new technologies are eliminating the need for the audience to break out of the story to carefully select their preferred branch. Motion sensors on VR goggles can detect the user's natural turn of the head, allowing them to gaze upon elements of the scene that they choose. In addition, we see sensors like Google Tango that detect the position of a smartphone in a room so that you can move it around like a "viewport" within a dynamic scene. Imagine moving your phone or tablet in front of you to navigate around 3D virtual objects. This means the audience doesn't have to click buttons to see different views but can simply move their head or device naturally. 
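As a toy illustration of the sensor-driven navigation described above, consider how a sensed head or device yaw might map onto the visible slice of a 360-degree panorama. The function name and the 90-degree field of view are assumptions made for the sake of the example, not values from any particular headset:

```python
# Illustrative sketch: a 360-degree panorama viewer whose visible window
# follows the sensed yaw of the viewer's head (or phone), so no button
# press is needed to change views.

def visible_window(yaw_degrees, fov=90):
    """Return the (start, end) angles of the panorama slice shown for a
    given yaw, wrapping around at the 360-degree seam."""
    start = (yaw_degrees - fov / 2) % 360
    end = (yaw_degrees + fov / 2) % 360
    return start, end

# Facing straight ahead (yaw 0) shows the slice from 315 degrees,
# across the seam, to 45 degrees; turning around shows 135 to 225.
print(visible_window(0))     # (315.0, 45.0)
print(visible_window(180))   # (135.0, 225.0)
```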

There's more, though: Apple recently acquired Emotient, a company that uses advanced computer vision to recognize people's emotions. In the past, Emotient and other companies like Affectiva used these technologies to help content producers maximize the emotional impact of advertisements and movies, though not in real time. Imagine the next generation of these technologies embedded in smartphones, VR goggles, robots and autonomobiles so that they not only sense the audience's engagement in real time, but also predict disengagement and prevent it by dynamically shifting the content to appeal to an individual's preferences, emotional state and situation. Responsive media will be more like an engaging conversation among humans than passive consumption. Imagine: 

  • Home robots can detect when a child is frustrated during homework and offer assistance. 
  • Movies on smart TVs, phones and VR headsets can adapt the story to be more dramatic, action-packed or thought-provoking depending on how the individual responds to each scene. 
  • Autonomobile media players can sense driving conditions in real time and throttle down the excitement when the driver's attention needs to return to the road. 

... and many more responsive experiences. Ultimately, the future of responsive media is the convergence of lean-back entertainment and lean-forward gaming. Imagine the day when you can watch a computer-generated simulation game between the fantasy football team you designed and that of your fiercest rival. While the simulation uses player statistics to dynamically generate the game play, it also reads your emotions and those of your rival, selecting outcomes that maximize the thrill of the experience and keep you both on the edge of your seats!
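The common thread running through these scenarios can be sketched as a simple feedback loop: content intensity is nudged up when sensed engagement drops and dialed down when the situation demands the audience's attention elsewhere. The thresholds and step sizes below are illustrative guesses, not measurements from any real system:

```python
# Hedged sketch of a responsive-media control loop. Intensity and
# engagement are both on a 0.0-1.0 scale; all constants are invented
# for illustration.

def adjust_intensity(intensity, engagement, needs_attention):
    """One step of the loop: back off when the real world needs the
    audience (e.g. tricky driving conditions), ramp up when the sensed
    engagement suggests the viewer is drifting away."""
    if needs_attention:            # safety wins over entertainment
        return max(0.0, intensity - 0.2)
    if engagement < 0.4:           # viewer drifting: raise the drama
        return min(1.0, intensity + 0.1)
    return intensity               # engaged and safe: leave it alone

level = 0.5
level = adjust_intensity(level, engagement=0.3, needs_attention=False)
print(round(level, 2))   # 0.6 - bored viewer, content intensifies
level = adjust_intensity(level, engagement=0.9, needs_attention=True)
print(round(level, 2))   # 0.4 - road needs the driver, content backs off
```

The design choice worth noting is that the audience never issues a command: the loop reads passive signals and adapts, which is what separates responsive media from the button-driven "interactive media" of the past.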