Artist-Friendly Relightable and Animatable Neural Heads

In this work, we simultaneously tackle both the motion and illumination problems, proposing a new method for relightable and animatable neural heads.

June 3, 2024
CVPR (2024)


Authors

Yingyan Xu (DisneyResearch|Studios/ETH Joint PhD)

Prashanth Chandran (DisneyResearch|Studios)

Sebastian Weiss (DisneyResearch|Studios)

Markus Gross (DisneyResearch|Studios/ETH Zurich)

Gaspard Zoss (DisneyResearch|Studios)

Derek Bradley (DisneyResearch|Studios)

Abstract

An increasingly common approach for creating photorealistic digital avatars is through the use of volumetric neural fields. The original neural radiance field (NeRF) allowed for impressive novel view synthesis of static heads when trained on a set of multi-view images, and follow-up methods showed that these neural representations can be extended to dynamic avatars. Recently, new variants also overcame the usual drawback of baked-in illumination in neural representations, showing that static neural avatars can be relit in any environment. In this work we simultaneously tackle both the motion and illumination problems, proposing a new method for relightable and animatable neural heads. Our method builds on a proven dynamic avatar approach based on a mixture of volumetric primitives, combined with a recently proposed lightweight hardware setup for relightable neural fields, and includes a novel architecture that allows relighting dynamic neural avatars performing unseen expressions in any environment, even with near-field illumination and viewpoints.

Copyright Notice