Monocular Facial Appearance Capture in the Wild

In this work, we present a new method for reconstructing the appearance properties of human faces from a lightweight capture procedure in an unconstrained environment.

October 16, 2025
International Conference on Computer Vision (ICCV), 2025

 

Authors

Yingyan Xu (ETH Zurich, DisneyResearch|Studios)

Kate Gadola (ETH Zurich)

Prashanth Chandran (DisneyResearch|Studios)

Sebastian Weiss (DisneyResearch|Studios)

Markus Gross (DisneyResearch|Studios/ETH Zurich)

Gaspard Zoss (DisneyResearch|Studios)

Derek Bradley (DisneyResearch|Studios)


Abstract

We present a new method for reconstructing the appearance properties of human faces from a lightweight capture procedure in an unconstrained environment. Our method recovers the surface geometry, diffuse albedo, specular intensity and specular roughness from a monocular video of a simple head rotation captured in the wild. Notably, we make no simplifying assumptions about the environment lighting, and we explicitly take visibility and occlusions into account. As a result, our method can produce facial appearance maps that approach the fidelity of studio-based multiview captures, but with a far easier and cheaper procedure.
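
For context, the abstract names the per-texel quantities the method estimates: diffuse albedo, specular intensity and specular roughness. The sketch below illustrates how maps like these conventionally drive a simple analytic shading model. The Blinn-Phong-style specular lobe, the shade_point function and all parameter values are illustrative assumptions for this sketch and are not the reflectance model used in the paper.

    import numpy as np

    def normalize(v):
        # Return the unit-length version of a 3-vector.
        return v / np.linalg.norm(v)

    def shade_point(n, v, l, light_rgb, diffuse_albedo, spec_intensity, spec_roughness):
        """Illustrative Blinn-Phong-style shading of one surface point.

        n, v, l          -- unit surface normal, view direction, light direction
        light_rgb        -- RGB radiance arriving from the light
        diffuse_albedo   -- per-texel RGB diffuse reflectance
        spec_intensity   -- scalar strength of the specular lobe
        spec_roughness   -- scalar in (0, 1]; larger means a broader highlight
        """
        n_dot_l = max(np.dot(n, l), 0.0)

        # Lambertian diffuse term driven by the albedo map.
        diffuse = diffuse_albedo * n_dot_l

        # Blinn-Phong specular lobe; roughness controls the exponent.
        h = normalize(v + l)  # half vector between view and light
        exponent = 2.0 / max(spec_roughness ** 2, 1e-4)
        specular = spec_intensity * max(np.dot(n, h), 0.0) ** exponent * n_dot_l

        return light_rgb * (diffuse + specular)

    # Example: shade one skin-like texel under a single white light.
    color = shade_point(
        n=np.array([0.0, 0.0, 1.0]),
        v=normalize(np.array([0.0, 0.2, 1.0])),
        l=normalize(np.array([0.3, 0.3, 1.0])),
        light_rgb=np.array([1.0, 1.0, 1.0]),
        diffuse_albedo=np.array([0.55, 0.35, 0.28]),
        spec_intensity=0.25,
        spec_roughness=0.35,
    )
    print(color)
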
