High-Quality Passive Facial Performance Capture using Anchor Frames

 

We present a new technique for passive and markerless facial performance capture based on anchor frames.

August 7, 2011
ACM SIGGRAPH 2011

 

Authors

Thabo Beeler (Disney Research)

Fabian Hahn (Disney Research/ETH Joint PhD)

Derek Bradley (Disney Research)

Bernd Bickel (Disney Research)

Paul Beardsley (Disney Research)

Craig Gotsman (Technion – Israel Institute of Technology)

Robert W. Sumner (Disney Research)

Markus Gross (Disney Research/ETH Zurich)


Abstract

Our method starts with high-resolution per-frame geometry acquisition using state-of-the-art stereo reconstruction and proceeds to establish a single triangle mesh that is propagated through the entire performance. Leveraging the fact that facial performances often contain repetitive subsequences, we identify anchor frames as those containing facial expressions similar to a manually chosen reference expression. Anchor frames are computed automatically over one or even multiple performances. We introduce a robust image-space tracking method that computes pixel matches directly from the reference frame to all anchor frames, and from there to the remaining frames in the sequence via sequential matching. This allows us to propagate one reconstructed frame to an entire sequence in parallel, in contrast to previous sequential methods. Our anchored reconstruction approach also limits tracker drift and robustly handles occlusions and motion blur. The parallel tracking and mesh propagation keep computation times low. Our technique can even automatically match anchor frames across different sequences captured on different occasions, propagating a single mesh to all performances.
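
To make the anchor-frame idea concrete, the following is a minimal sketch, not taken from the paper: it assumes grayscale frames stored as NumPy arrays and substitutes a simple normalized cross-correlation against the reference frame for the paper's image-space similarity measure, keeping only above-threshold local maxima of the similarity curve as anchors. The function and parameter names are illustrative.

```python
# Minimal anchor-frame selection sketch (illustrative, not the paper's exact metric).
# Assumes: all frames are grayscale images of identical size, given as NumPy arrays.
import numpy as np

def similarity(frame, reference):
    """Normalized cross-correlation between a frame and the reference frame."""
    f = frame - frame.mean()
    r = reference - reference.mean()
    denom = np.linalg.norm(f) * np.linalg.norm(r)
    return float((f * r).sum() / denom) if denom > 0 else 0.0

def find_anchor_frames(frames, reference, threshold=0.9):
    """Return indices of frames whose expression is close to the reference.

    Only local maxima of the similarity curve above `threshold` are kept,
    so each repetition of the reference expression contributes one anchor.
    """
    scores = np.array([similarity(f, reference) for f in frames])
    anchors = []
    for i, s in enumerate(scores):
        if s < threshold:
            continue
        left = scores[i - 1] if i > 0 else -np.inf
        right = scores[i + 1] if i + 1 < len(scores) else -np.inf
        if s >= left and s >= right:
            anchors.append(i)
    return anchors

# Example usage: treat the first frame as the reference expression.
# anchors = find_anchor_frames(sequence, sequence[0], threshold=0.9)
```

In the full method, pixel matches are then computed from the reference frame to each selected anchor and propagated sequentially to the frames in between, which is what allows the subsequences between anchors to be processed in parallel.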

Copyright Notice