News Release

FaceDirector software generates desired performances in post-production, avoiding reshoots

System from Disney Research and University of Surrey blends an actor's facial performances from multiple takes

Peer-Reviewed Publication

Disney Research

Some film directors are famous for demanding that scenes be shot and re-shot repeatedly until actors express just the right emotion at the right time. With a new system developed by Disney Research and the University of Surrey, directors will be able to fine-tune those performances in post-production rather than on the film set.

Called FaceDirector, the system enables a director to seamlessly blend an actor's facial performances from multiple video takes to achieve the desired effect.

"It's not unheard of for a director to re-shoot a crucial scene dozens of times, even 100 or more times, until satisfied," said Markus Gross, vice president of research at Disney Research. "That not only takes a lot of time - it also can be quite expensive. Now our research team has shown that a director can exert control over an actor's performance after the shoot with just a few takes, saving both time and money."

Jean-Charles Bazin, associate research scientist at Disney Research, and Charles Malleson, a Ph.D. student at the University of Surrey's Centre for Vision, Speech and Signal Processing, showed that FaceDirector is able to create a variety of novel, visually plausible versions of performances of actors in close-up and mid-range shots.

Moreover, the system works with normal 2D video input acquired by standard cameras, without the need for additional hardware or 3D face reconstruction.

The researchers will present their findings at ICCV 2015, the International Conference on Computer Vision, Dec. 11-18, in Santiago, Chile.

"The central challenge for combining an actor's performances from separate takes is video synchronization," Bazin said. "But differences in head pose, emotion, expression intensity, as well as pitch accentuation and even the wording of the speech, are just a few of many difficulties in syncing video takes."

Bazin, Malleson and the rest of the team solved this problem by developing a method that automatically analyzes both facial expressions and audio cues, then identifies corresponding frames between the takes using a graph-based framework.

"To the best of our knowledge, our work is the first to combine audio and facial features for achieving an optimal nonlinear, temporal alignment of facial performance videos," Malleson said.

Once the takes are synchronized, the system enables a director to control the performance by choosing the desired facial expressions and timing from either video; the selected performances are then blended together using facial landmarks, optical flow and compositing.
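As a rough illustration of that blending step, the sketch below warps a synchronized frame from one take onto the other with dense optical flow and cross-dissolves the result. It uses OpenCV's Farnebäck flow as an assumed stand-in for the system's flow component and omits the landmark-guided alignment and compositing refinements; treat it as a sketch, not the published pipeline.

```python
# Rough sketch of blending two synchronized frames: estimate dense
# optical flow from frame A to frame B, warp B onto A's geometry,
# then cross-dissolve. The landmark-driven alignment and compositing
# the real system also uses are omitted here for brevity.
import cv2
import numpy as np

def blend_frames(frame_a: np.ndarray, frame_b: np.ndarray, alpha: float) -> np.ndarray:
    """Blend synchronized frames; alpha=0 keeps take A, alpha=1 keeps take B."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Dense flow mapping pixels of A to their positions in B.
    flow = cv2.calcOpticalFlowFarneback(
        gray_a, gray_b, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Resample B at the flow-displaced coordinates so it lands on A's geometry.
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    warped_b = cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)

    # Simple cross-dissolve between A and the warped B.
    return cv2.addWeighted(frame_a, 1.0 - alpha, warped_b, alpha, 0.0)
```

A full implementation would restrict the blend to the face region and use the tracked landmarks to correct residual misalignment before compositing.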

To test the system, actors performed several lines of dialog, repeating each performance to convey a different emotion: happiness, sadness, excitement, fear, anger and so on. The line readings were captured in HD resolution using standard compact cameras. The researchers were able to synchronize the videos automatically and in real time on a standard desktop computer, and users could generate novel versions of the performances by interactively blending the video takes.

The researchers also demonstrated FaceDirector in several other applications: generating multiple performances from a sparse set of input video takes for nonlinear video storytelling, script correction and editing, and exchanging voices between emotions (for example, pairing a sad voice with a happy face to create an entertaining performance).

###

In addition to Bazin and Malleson, the research team included Oliver Wang, Derek Bradley, Thabo Beeler and Alexander Sorkine-Hornung of Disney Research and Adrian Hilton of the University of Surrey. For more information, visit the project web site at http://www.disneyresearch.com/publication/facedirector/.

About Disney Research

Disney Research is a network of research laboratories supporting The Walt Disney Company. Its purpose is to pursue scientific and technological innovation to advance the company's broad media and entertainment efforts. Vice Presidents Jessica Hodgins and Markus Gross manage Disney Research facilities in Los Angeles, Pittsburgh, Zürich, and Boston and work closely with the Pixar and ILM research groups in the San Francisco Bay Area. Research topics include computer graphics, animation, video processing, computer vision, robotics, wireless & mobile computing, human-computer interaction, displays, behavioral economics, and machine learning.

Website: http://www.disneyresearch.com
Twitter: @DisneyResearch

