User-Guided Lip Correction for Facial Performance Capture


We present a novel user-guided approach to correcting the common lip shape errors produced by traditional capture systems.

July 11, 2018
ACM SIGGRAPH / Eurographics Symposium on Computer Animation (SCA) 2018


Authors

Dimitar Dinev (Disney Research/University of Utah)

Thabo Beeler (Disney Research)

Derek Bradley (Disney Research)

Moritz Baecher (Disney Research)

Hongyi Xu (Disney Research)

Ladislav Kavan (University of Utah)



Abstract

Facial performance capture is the primary method for generating facial animation in video games, feature films, and virtual environments, and recent advances have produced very compelling results. Still, one of the most challenging regions is the mouth, which often contains systematic errors due to the complex appearance and occlusion/dis-occlusion of the lips. We present a novel user-guided approach to correcting these common lip shape errors present in traditional capture systems. Our approach is to allow a user to manually correct a small number of problematic frames, and then our system learns the types of corrections desired and automatically corrects the entire performance. As correcting even a single frame using traditional 3D sculpting tools can be time-consuming and require great skill, we also propose a simple and fast 2D sketch-based method for generating plausible lip corrections for the problematic key frames. We demonstrate our results on captured performances of three different subjects and validate our method with an additional sequence that contains ground truth lip reconstructions.
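To illustrate the flavor of correction propagation described in the abstract, the sketch below blends a small set of user-supplied key-frame corrections across an entire performance using Gaussian RBF weights computed in a pose-feature space. This is a minimal illustration under stated assumptions, not the paper's actual method: the function name propagate_corrections, the per-frame pose-feature input, and the RBF blending scheme are all hypothetical choices made for the example.

# Illustrative sketch only; the paper does not specify this implementation.
# Assumes corrections are per-vertex offset fields on a tracked lip mesh and
# that they can be propagated from a few user-corrected key frames to all
# frames by smooth interpolation in a pose-feature space (an assumption,
# not the authors' formulation).
import numpy as np

def propagate_corrections(pose_features, key_frames, key_corrections, sigma=1.0):
    """Blend sparse key-frame corrections over a whole performance.

    pose_features   : (F, D) array, one feature vector per captured frame
                      (e.g. lip landmark coordinates) -- hypothetical input.
    key_frames      : list of frame indices the user corrected.
    key_corrections : (K, V, 3) array of per-vertex offsets sculpted by the user.
    Returns an (F, V, 3) array of per-vertex offsets for every frame.
    """
    keys = pose_features[key_frames]                                     # (K, D)
    # Gaussian RBF weights between every frame and every corrected key frame.
    d2 = ((pose_features[:, None, :] - keys[None, :, :]) ** 2).sum(-1)   # (F, K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True) + 1e-12                            # normalize per frame
    # Weighted blend of the key-frame correction fields.
    return np.einsum('fk,kvd->fvd', w, key_corrections)

In such a setup, the corrected performance would be obtained by adding the returned offsets to the tracked vertex positions, e.g. corrected = tracked_vertices + propagate_corrections(features, keys, sculpted_offsets).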
