Accurate Markerless Jaw Tracking for Facial Performance Capture
We present the first method to accurately track the invisible jaw based solely on the visible skin surface, without the need for any markers or augmentation of the actor.
July 12, 2019
ACM SIGGRAPH 2019
Authors
Gaspard Zoss (Disney Research/ETH Joint PhD)
Thabo Beeler (Disney Research)
Markus Gross (Disney Research/ETH Zurich)
Derek Bradley (Disney Research)
We present the first method to accurately track the invisible jaw based solely on the visible skin surface, without the need for any markers or augmentation of the actor. As such, the method can readily be integrated with off-the-shelf facial performance capture systems. The core idea is to learn a non-linear mapping from the skin deformation to the underlying jaw motion on a dataset for which ground-truth jaw poses have been acquired, and then to retarget the mapping to new subjects. Solving for the jaw pose plays a central role in visual effects pipelines, since accurate jaw motion is required both when retargeting to fantasy characters and for physical simulation. Currently, this task is performed mostly manually to achieve the desired level of accuracy, and the presented method has the potential to fully automate this labour-intensive and error-prone process.
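To make the core idea concrete, the sketch below illustrates what "learning a non-linear mapping from skin deformation to jaw pose" could look like in the simplest possible form. This is not the paper's method: the feature representation (flattened per-vertex skin displacements), the regressor (random-feature ridge regression), the 6-DoF jaw pose parameterization, and all dimensions are illustrative assumptions made up for this example, and the training data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: for each captured frame we have skin vertex
# displacements (input) and a ground-truth 6-DoF jaw pose (target).
# All sizes are arbitrary stand-ins, not values from the paper.
n_frames, n_verts = 200, 50
X = rng.normal(size=(n_frames, n_verts * 3))      # flattened skin displacements
hidden_map = rng.normal(size=(n_verts * 3, 6))
Y = np.tanh(X @ hidden_map) * 0.1                 # synthetic jaw poses (rot + trans)

# Non-linear mapping via a random-feature lift followed by ridge regression;
# the paper learns its own non-linear mapping, this is just one simple choice.
D = 256
W = rng.normal(size=(X.shape[1], D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def lift(x):
    """Random Fourier-style features: a fixed non-linear basis expansion."""
    return np.cos(x @ W * 0.1 + b)

lam = 1e-3                                        # ridge regularizer
Phi = lift(X)
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ Y)

# Predict jaw poses back on the training frames to sanity-check the fit.
pred = lift(X) @ theta
err = np.abs(pred - Y).mean()
```

With more random features than training frames, the regularized solve nearly interpolates the training poses; on real capture data one would instead validate on held-out frames and retarget the learned mapping to new subjects, as the abstract describes.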