Rendering with Style: Combining Traditional and Neural Approaches for High-Quality Face Rendering


We propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention.

November 30, 2021
ACM SIGGRAPH Asia 2021


Authors

Prashanth Chandran (DisneyResearch|Studios/ETH Joint PhD)

Sebastian Winberg (DisneyResearch|Studios)

Gaspard Zoss (DisneyResearch|Studios/ETH Joint PhD)

Jérémy Riviere (DisneyResearch|Studios)

Markus Gross (DisneyResearch|Studios/ETH Zurich)

Paulo Gotardo (DisneyResearch|Studios)

Derek Bradley (DisneyResearch|Studios)



Abstract

In this work we propose to combine incomplete, high-quality renderings showing only facial skin with recent methods for neural rendering of faces, in order to automatically and seamlessly create photo-realistic full-head portrait renders from captured data without the need for artist intervention. Our method begins with traditional face rendering, where the skin is rendered with the desired appearance, expression, viewpoint, and illumination. These skin renders are then projected into the latent space of a pre-trained neural network that can generate arbitrary photo-real face images (StyleGAN2). The result is a sequence of realistic face images that matches the identity and appearance of the 3D character at the skin level, and is completed naturally with synthesized hair, eyes, inner mouth and surroundings.
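The core projection step described above can be illustrated at a high level: given a target render, search for the latent code whose generated image best reproduces it. The following is a minimal toy sketch of that idea, assuming a stand-in linear "generator" in place of the real pre-trained StyleGAN2 network (which would be optimized with perceptual losses and Adam rather than plain gradient descent); all names here are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": maps a latent code w to a flat "image" x = A @ w.
# In the actual method this role is played by the pre-trained StyleGAN2 generator.
A = rng.normal(size=(64, 16))

def generate(w):
    return A @ w

# A target "skin render", here synthesized from an unknown ground-truth latent.
w_true = rng.normal(size=16)
target = generate(w_true)

# Project the target into the latent space by minimizing the L2 image loss
# with gradient descent over the latent code w.
w = np.zeros(16)
lr = 1e-3
for _ in range(2000):
    residual = generate(w) - target
    grad = 2.0 * A.T @ residual   # gradient of ||A @ w - target||^2 w.r.t. w
    w -= lr * grad

loss = float(np.sum((generate(w) - target) ** 2))
```

After optimization, `generate(w)` closely matches the target, which is the essence of latent-space projection: the recovered code can then be decoded to a complete image even though the target only constrained part of it.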

Copyright Notice