Facial Expression Synthesis using a Global-Local Multilinear Framework
We present a practical method to synthesize plausible 3D facial expressions that preserve the identity of a target subject.
May 25, 2020
Eurographics 2020
Authors
Mengjiao Wang (DisneyResearch|Studios Intern)
Derek Bradley (DisneyResearch|Studios)
Stefanos Zafeiriou (Imperial College London)
Thabo Beeler (DisneyResearch|Studios)
Abstract
We present a practical method to synthesize plausible 3D facial expressions that preserve the identity of a target subject. The ability to synthesize an entire facial rig from a single neutral expression has a wide range of applications in both computer graphics and computer vision, from the efficient and cost-effective creation of CG characters to scalable data generation for machine learning. Unlike previous methods based on multilinear models, the proposed approach is able to extrapolate well beyond the sample pool, which allows it to accurately reproduce the identity of the target subject and to create artifact-free expression shapes while requiring only a small input dataset. We introduce global-local multilinear models that leverage the strengths of expression-specific and identity-specific local models, combined with coarse motion estimates from a global model. Experimental results show that we achieve high-quality, identity-preserving facial expression synthesis results that outperform existing methods both quantitatively and qualitatively.
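To make the multilinear setting concrete, the sketch below shows the standard global multilinear (Tucker-style) face model that approaches like this one build on: a core tensor is contracted with identity and expression weight vectors to produce one face shape. This is a minimal illustration only, not the paper's implementation; all names, dimensions, and the random core tensor are hypothetical placeholders (a real model would learn the core from registered scans, and the paper's contribution layers per-region local models on top of this global evaluation).

```python
import numpy as np

# Minimal sketch of a bilinear (two-mode multilinear) face model.
# All dimensions and names are illustrative, not the paper's actual setup.
N_VERTS = 5000        # vertices per registered face mesh (hypothetical)
N_ID, N_EXP = 30, 25  # identity / expression mode dimensions (hypothetical)

# Core tensor: (flattened shape mode) x (identity mode) x (expression mode).
# In practice this would be learned from a registered scan database;
# here it is random, purely to make the sketch runnable.
rng = np.random.default_rng(0)
core = rng.standard_normal((N_VERTS * 3, N_ID, N_EXP))

def synthesize(core, w_id, w_exp):
    """Contract the core tensor with identity and expression weights:
    s = C x_2 w_id x_3 w_exp, returning one (N_VERTS, 3) face shape."""
    shape = np.tensordot(core, w_id, axes=([1], [0]))  # -> (3V, N_EXP)
    shape = shape @ w_exp                              # -> (3V,)
    return shape.reshape(-1, 3)

# Typical use: fit w_id to a new subject's neutral scan once, then sweep
# w_exp over the expression basis to synthesize that subject's full rig.
w_id = rng.standard_normal(N_ID)
w_exp = np.zeros(N_EXP)
w_exp[3] = 1.0  # activate one expression basis vector
mesh = synthesize(core, w_id, w_exp)
print(mesh.shape)  # (5000, 3)
```

Fixing the identity weights while varying only the expression weights is what lets a single neutral scan drive an entire expression set; the paper's global-local variant addresses the limited extrapolation of this purely global formulation.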