Data-driven Extraction and Composition of Secondary Dynamics in Facial Performance Capture

 

Our work aims to compute and characterize the difference between a captured dynamic facial performance and a speculative quasistatic variant of the same motion, had the inertial effects been absent.

August 17, 2020
ACM SIGGRAPH 2020

 

Authors

Gaspard Zoss (DisneyResearch|Studios/ETH Joint PhD)

Eftychios Sifakis (University of Wisconsin, Madison/DisneyResearch|Studios)

Markus Gross (DisneyResearch|Studios/ETH Zurich)

Thabo Beeler (DisneyResearch|Studios)

Derek Bradley (DisneyResearch|Studios)

 


Abstract

Performance capture of expressive subjects will inevitably incorporate some fraction of motion caused by inertial effects and dynamic overshoot from ballistic motion. These secondary dynamic effects are normally unwanted, as the captured facial performance is often retargeted to different head motion. This paper advances the hypothesis that, for a highly constrained elastic medium such as the human face, these secondary inertial effects are predominantly due to the motion of the underlying bony structures. Based on this hypothesis, we present the ability to either subtract parasitic secondary dynamics that resulted from unintentional motion during capture, or compose such effects on top of a quasistatic performance to simulate a new dynamic motion of the actor’s body and skull, either artist-prescribed or acquired via motion capture.
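
As a rough illustration of this decomposition (not the paper’s actual method), the secondary dynamics can be thought of as a per-frame, per-vertex residual between the captured dynamic mesh and its quasistatic counterpart; that residual can then be subtracted from a capture or layered onto a new quasistatic performance. The following Python sketch is purely illustrative, and the function names and array shapes are hypothetical.

import numpy as np

def extract_secondary_dynamics(dynamic_seq: np.ndarray,
                               quasistatic_seq: np.ndarray) -> np.ndarray:
    # Residual between the captured dynamic performance and its quasistatic
    # counterpart; both are hypothetical arrays of shape (frames, vertices, 3).
    return dynamic_seq - quasistatic_seq

def subtract_secondary_dynamics(dynamic_seq: np.ndarray,
                                residual_seq: np.ndarray) -> np.ndarray:
    # Remove parasitic dynamics from a capture, recovering a quasistatic performance.
    return dynamic_seq - residual_seq

def compose_secondary_dynamics(quasistatic_seq: np.ndarray,
                               residual_seq: np.ndarray) -> np.ndarray:
    # Layer extracted (or otherwise obtained) secondary dynamics on top of a new
    # quasistatic performance, e.g. one retargeted to different head motion.
    return quasistatic_seq + residual_seq

In the paper itself the residual is not assumed to be known; the abstract suggests it is driven by the motion of the underlying bony structures, so in practice such a residual would be predicted from head and skull motion rather than taken as given. The add/subtract composition above only captures the intended use of the extracted dynamics.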

Copyright Notice