Neural Denoising for Deep-Z Monte Carlo Renderings

We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level.

April 20, 2024
Eurographics (2024)

Authors

Xianyao Zhang (DisneyResearch|Studios / ETH Zürich) 

Gerhard Röthlin (DisneyResearch|Studios)

Shilin Zhu (Pixar Animation Studios)

Tunç Ozan Aydın (DisneyResearch|Studios)

Farnood Salehi (DisneyResearch|Studios)

Markus Gross (DisneyResearch|Studios / ETH Zürich)

Marios Papas (DisneyResearch|Studios)

Abstract

We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges of denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.
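As a rough illustration of the hybrid reconstruction described in the abstract, the minimal NumPy sketch below denoises each depth bin with its own predicted kernel, flattens the result, and blends it per pixel with a denoised version of the flattened image. All names (apply_kernels, hybrid_reconstruction), tensor shapes, the fixed bin count, and the simple additive flattening are illustrative assumptions rather than the paper's implementation, which additionally relies on depth-aware neighbor indexing not modeled here.

```python
# Hypothetical sketch of a hybrid deep-Z reconstruction; not the paper's code.
import numpy as np


def apply_kernels(image, kernels):
    """Weighted average over a k x k neighborhood with per-pixel predicted kernels.

    image:   (H, W, C) noisy color
    kernels: (H, W, k, k) normalized denoising weights
    """
    h, w, _ = image.shape
    k = kernels.shape[-1]
    r = k // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    out = np.zeros_like(image)
    for dy in range(k):
        for dx in range(k):
            out += kernels[:, :, dy, dx, None] * padded[dy:dy + h, dx:dx + w]
    return out


def hybrid_reconstruction(deep_color, deep_alpha, bin_kernels, flat_kernels, blend):
    """Combine a depth-resolved (per-bin) and a pixel-level (flattened) reconstruction.

    deep_color:   (B, H, W, 3) color per depth bin (fixed B for simplicity;
                  real deep-Z images carry a variable bin count per pixel)
    deep_alpha:   (B, H, W, 1) opacity per depth bin
    bin_kernels:  (B, H, W, k, k) kernels applied within each bin slice
    flat_kernels: (H, W, k, k) kernels applied to the flattened image
    blend:        (H, W, 1) per-pixel mixing weight in [0, 1]
    """
    # Depth-resolved path: denoise every bin slice, then flatten with a simple
    # alpha-weighted sum (a stand-in for proper front-to-back compositing).
    denoised_bins = np.stack(
        [apply_kernels(deep_color[b], bin_kernels[b]) for b in range(deep_color.shape[0])]
    )
    flattened_from_bins = (denoised_bins * deep_alpha).sum(axis=0)

    # Pixel-level path: flatten the noisy input first, then denoise the result.
    flat_noisy = (deep_color * deep_alpha).sum(axis=0)
    flattened_direct = apply_kernels(flat_noisy, flat_kernels)

    # Blend the two reconstructions per pixel.
    return blend * flattened_from_bins + (1.0 - blend) * flattened_direct
```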
