Synth2Track Editor for Efficient Match-Animation
In this work, we developed new tools that significantly reduce the time required for complex shots, combining automation with human expertise to overcome the limitations of current markerless motion capture systems.
Authors
Jakob Buhmann (DisneyResearch|Studios)
Douglas L. Moore (Industrial Light & Magic)
Dominik Borer (DisneyResearch|Studios)
Martin Guay (DisneyResearch|Studios)

A critical step in VFX production is capturing the movement of actors in order to integrate 3D digital assets into live-action footage. In recent years, advances in regression-based computer vision models, such as human detection and motion estimation models, have enabled new workflows in which parts of the Match-Animation process are automated. However, difficult shots containing ambiguous visual cues, strong occlusions, or unusual appearances can cause automated systems to fail, forcing users to revert to manual specification or to the previous generation of semi-automatic tools based on local feature tracking [Bregler et al. 2009; Sullivan et al. 2006]. Our key insight is that regression models can be used not only at the beginning of the process, but throughout it, by incorporating manually specified cues. For example, given a partially detected actor, the user can specify a few landmarks manually, which, once re-injected into a model, will yield new detections for the rest of the body. Based on this insight, we developed new tools that significantly reduce the time required for complex shots, combining automation with human expertise to overcome the limitations of current markerless motion capture systems.
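The re-injection loop described above can be sketched roughly as follows. This is a hypothetical illustration only: the joint names, confidence threshold, and the fixed-bone-offset "predictor" are stand-ins invented here, whereas the paper's actual system relies on learned regression models.

```python
# Toy stand-in for a learned regressor: fixed 2D offsets between
# connected joints (parent -> child), used to propagate positions.
BONE_OFFSETS = {
    ("shoulder", "elbow"): (0.0, -0.3),
    ("elbow", "wrist"): (0.0, -0.25),
}

def predict_from_cues(cues):
    """Re-estimate missing joints from trusted landmarks by propagating
    the toy bone offsets (a stand-in for re-running the model)."""
    pose = dict(cues)
    changed = True
    while changed:
        changed = False
        for (parent, child), (dx, dy) in BONE_OFFSETS.items():
            if parent in pose and child not in pose:
                px, py = pose[parent]
                pose[child] = (px + dx, py + dy)
                changed = True
    return pose

def refine_detection(detected, confidences, user_cues, threshold=0.5):
    """Keep confident automatic detections, let manually specified
    landmarks override them, then re-predict the remaining joints."""
    cues = {j: p for j, p in detected.items()
            if confidences.get(j, 0.0) >= threshold}
    cues.update(user_cues)  # manual specification always wins
    return predict_from_cues(cues)
```

With a confident shoulder detection and an unreliable elbow, the low-confidence joint is discarded and re-predicted from the trusted landmark; a user-supplied cue would override either.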
