Improved Lighting Models for Facial Appearance Capture

 

We compare the results obtained with a state-of-the-art appearance capture method [RGB∗20], with and without our proposed improvements to the lighting model.

April 25, 2022
Eurographics 2022

 

Authors

Yingyan Xu (DRZ/ETH Joint M.Sc.)

Jérémy Riviere (DisneyResearch|Studios)

Gaspard Zoss (DisneyResearch|Studios/ETH Joint PhD)

Prashanth Chandran (DisneyResearch|Studios/ETH Joint PhD)

Derek Bradley (DisneyResearch|Studios)

Paulo Gotardo (DisneyResearch|Studios)


Abstract

Facial appearance capture techniques estimate geometry and reflectance properties of facial skin by performing a computationally intensive inverse rendering optimization in which one or more images are re-rendered a large number of times and compared to real images coming from multiple cameras. Due to the high computational burden, these techniques often make several simplifying assumptions to tame complexity and make the problem more tractable. For example, it is common to assume that the scene consists of only distant light sources, and ignore indirect bounces of light (on the surface and within the surface). Also, methods based on polarized lighting often simplify the light interaction with the surface and assume perfect separation of diffuse and specular reflectance. In this paper, we move in the opposite direction and demonstrate the impact on facial appearance capture quality when departing from these idealized conditions towards models that seek to more accurately represent the lighting, while at the same time minimally increasing computational burden. We compare the results obtained with a state-of-the-art appearance capture method [RGB∗20], with and without our proposed improvements to the lighting model.

Copyright Notice