FaceDirector: Continuous Control of Facial Performance in Video

 

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states.

December 11, 2015
International Conference on Computer Vision (ICCV) 2015

 

Authors

Charles Malleson (Centre for Vision, Speech and Signal Processing, University of Surrey, UK)

Jean-Charles Bazin (Disney Research)

Oliver Wang (Disney Research)

Derek Bradley (Disney Research)

Thabo Beeler (Disney Research)

Adrian Hilton (Centre for Vision, Speech and Signal Processing, University of Surrey, UK)

Alexander Sorkine-Hornung (Disney Research)


Abstract

We present a method to continuously blend between multiple facial performances of an actor, which can contain different facial expressions or emotional states. As an example, given sad and angry video takes of a scene, our method empowers a movie director to specify arbitrary weighted combinations and smooth transitions between the two takes in post-production. Our contributions include (1) a robust nonlinear audio-visual synchronization technique that exploits complementary properties of audio and visual cues to automatically determine robust, dense spatio-temporal correspondences between takes, and (2) a seamless facial blending approach that gives the director full control to interpolate timing, facial expression, and local appearance, in order to generate novel performances after filming. In contrast to most previous work, our approach operates entirely in image space, avoiding the need for 3D facial reconstruction. We demonstrate that our method can synthesize visually believable performances with applications in emotion transition, performance correction, and timing control.
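To make the two contributions concrete, the minimal Python sketch below (NumPy only) illustrates the general idea with stand-in components: a plain dynamic-time-warping alignment over per-frame audio features in place of the paper's robust nonlinear audio-visual synchronization, and a simple per-pixel cross-fade in place of its seamless facial blending. The function names, feature choice, and linear blend are illustrative assumptions, not the published algorithm.

# Illustrative sketch only (not the paper's method): align two takes with a
# basic DTW over per-frame audio features, then cross-fade the aligned video
# frames with a director-specified weight curve (0 -> take A, 1 -> take B).
import numpy as np

def dtw_path(feat_a: np.ndarray, feat_b: np.ndarray):
    """Dynamic time warping between two feature sequences of shape (n, d).
    Returns a list of (i, j) pairs aligning frames of take A to take B."""
    n_a, n_b = len(feat_a), len(feat_b)
    cost = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=2)
    acc = np.full((n_a + 1, n_b + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n_a + 1):
        for j in range(1, n_b + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],
                                                 acc[i, j - 1],
                                                 acc[i - 1, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n_a, n_b
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def blend_takes(frames_a, frames_b, feat_a, feat_b, weight_fn):
    """Cross-fade temporally aligned frames of two takes.
    weight_fn maps normalized shot time in [0, 1] to a blend weight in [0, 1]."""
    path = dtw_path(feat_a, feat_b)
    out = []
    for k, (i, j) in enumerate(path):
        w = weight_fn(k / max(len(path) - 1, 1))
        out.append((1.0 - w) * frames_a[i] + w * frames_b[j])
    return out

# Example: transition from a "sad" take (A) to an "angry" take (B) over the shot.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames_a = rng.random((90, 64, 64, 3))   # placeholder video frames, take A
    frames_b = rng.random((100, 64, 64, 3))  # placeholder video frames, take B
    feat_a = rng.random((90, 13))            # placeholder per-frame audio features
    feat_b = rng.random((100, 13))
    result = blend_takes(frames_a, frames_b, feat_a, feat_b,
                         weight_fn=lambda t: t)  # linear sad -> angry transition
    print(len(result), result[0].shape)

In this sketch the weight curve plays the role of the director's control: an arbitrary function of shot time selects how much of each take appears at every frame, whereas the actual system additionally interpolates timing and blends facial regions seamlessly in image space.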

Copyright Notice