MoRF: Morphable Radiance Fields for Multiview Neural Head Modeling
We demonstrate how MoRF is a strong new step towards 3D morphable neural head modeling.
July 24, 2022
ACM SIGGRAPH 2022
Authors
Daoye Wang (ETH Zürich)
Prashanth Chandran (DisneyResearch|Studios/ETH Joint PhD)
Gaspard Zoss (DisneyResearch|Studios/ETH Joint PhD)
Derek Bradley (DisneyResearch|Studios)
Paulo Gotardo (DisneyResearch|Studios)
Recent research has developed powerful generative models (e.g., StyleGAN2) that can synthesize complete human head images with impressive photorealism, enabling applications such as the photorealistic editing of real photographs. While these models can be trained on large collections of unposed images, their lack of explicit 3D knowledge makes it difficult to achieve even basic control over the 3D viewpoint without unintentionally altering the identity. On the other hand, recent Neural Radiance Field (NeRF) methods already achieve multiview-consistent, photorealistic renderings, but so far they are limited to a single facial identity. In this paper, we propose a new Morphable Radiance Field (MoRF) method that extends a NeRF into a generative neural model capable of realistically synthesizing multiview-consistent images of complete human heads with variable and controllable identity. MoRF allows morphing between particular identities and synthesizing arbitrary new ones, all while providing realistic and consistent rendering under novel viewpoints. We train MoRF in a simple supervised fashion by leveraging a high-quality database of multiview portrait images of several people, captured in studio with polarization-based separation of diffuse and specular reflection. Here, we demonstrate how MoRF is a strong new step towards 3D morphable neural head modeling.
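To make the core idea concrete, the sketch below shows one plausible way to condition a NeRF-style MLP on an identity latent code and to morph between two identities by interpolating their codes. This is a minimal, hypothetical illustration only: the class name, latent dimensionality, and network sizes are our own assumptions and are not the actual MoRF architecture from the paper.

```python
# Hypothetical identity-conditioned radiance field (illustrative sketch,
# NOT the authors' MoRF architecture).
import torch
import torch.nn as nn

class IdentityConditionedNeRF(nn.Module):
    """Maps encoded 3D points, view directions, and a per-identity latent
    code to volume density and RGB color."""

    def __init__(self, pos_dim=63, dir_dim=27, w_dim=128, hidden=256):
        super().__init__()
        # Trunk conditioned on the encoded position and the identity code.
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + w_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)        # volume density
        self.rgb_head = nn.Sequential(                # view-dependent color
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc, w):
        # x_enc: positionally encoded samples, d_enc: encoded view directions,
        # w: identity latent code broadcast to every sample.
        h = self.trunk(torch.cat([x_enc, w], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, d_enc], dim=-1))
        return sigma, rgb


# Morphing between two identities by interpolating their latent codes.
model = IdentityConditionedNeRF()
w_a, w_b = torch.randn(1, 128), torch.randn(1, 128)   # stand-ins for learned codes
x_enc, d_enc = torch.randn(1024, 63), torch.randn(1024, 27)
for t in (0.0, 0.5, 1.0):
    w = ((1 - t) * w_a + t * w_b).expand(1024, -1)
    sigma, rgb = model(x_enc, d_enc, w)  # feed into a standard NeRF volume integrator
```

In this sketch, novel identities correspond to new latent codes and morphing is simply latent interpolation; the multiview studio data described above would supervise both the network weights and the per-subject codes.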