Denoising with Kernel Prediction and Asymmetric Loss Functions


We present a modular convolutional architecture for denoising rendered images.

July 30, 2018
ACM SIGGRAPH 2018


Authors

Thijs Vogels (Disney Research)

Fabrice Rousselle (Disney Research)

Brian McWilliams (Disney Research)

Gerhard Röthlin (Disney Research)

Alex Harvill (Pixar Animation Studios)

David Adler (Walt Disney Animation Studios)

Mark Meyer (Pixar Animation Studios)

Jan Novak (Disney Research)


Abstract

We expand on the capabilities of kernel-predicting networks by combining them with a number of task-specific modules, and optimizing the assembly using an asymmetric loss. The source-aware encoder—the first module in the assembly—extracts low-level features and embeds them into a common feature space, enabling quick adaptation of a trained network to novel data. The spatial and temporal modules extract abstract, high-level features for kernel-based reconstruction, which is performed at three different spatial scales to reduce low-frequency artifacts. The complete network is trained using a class of asymmetric loss functions that are designed to preserve details and provide the user with direct control over the variance-bias trade-off during inference. We also propose an error-predicting module for inferring reconstruction error maps that can be used to drive adaptive sampling. Finally, we present a theoretical analysis of convergence rates of kernel-predicting architectures, shedding light on why kernel prediction performs better than synthesizing the colors directly, complementing the empirical evidence presented in this and previous works. We demonstrate that our networks attain results that compare favorably to state-of-the-art methods in terms of detail preservation, low-frequency noise removal, and temporal stability on a variety of production and academic datasets.
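To make the two central ideas of the abstract concrete, the sketch below illustrates (1) kernel-based reconstruction, where a network predicts a per-pixel filter kernel that is softmax-normalized and applied to the noisy input, and (2) an asymmetric loss that penalizes errors more heavily when the prediction lands on the opposite side of the reference from the noisy input. This is a minimal single-channel NumPy illustration, not the paper's implementation; the function names, the kernel size, and the exact form of the asymmetry factor `lam` are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used to normalize predicted kernel weights.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kernel_reconstruct(noisy, kernel_logits, k=5):
    """Apply per-pixel softmax-normalized k x k kernels to a noisy image.

    noisy: (H, W) grayscale image.
    kernel_logits: (H, W, k*k) raw kernel weights (e.g. a network's output).
    Softmax normalization guarantees non-negative weights summing to one,
    so the reconstruction is a convex combination of nearby noisy pixels.
    """
    H, W = noisy.shape
    pad = k // 2
    padded = np.pad(noisy, pad, mode="edge")
    weights = softmax(kernel_logits, axis=-1)
    out = np.zeros_like(noisy)
    idx = 0
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out += weights[:, :, idx] * padded[pad + dy:pad + dy + H,
                                               pad + dx:pad + dx + W]
            idx += 1
    return out

def asymmetric_l1(pred, ref, noisy, lam=2.0):
    """L1 loss amplified by `lam` when the prediction errs on the opposite
    side of the reference from the noisy input (a hypothetical form of an
    asymmetric loss; larger `lam` biases the network toward retaining
    residual noise rather than introducing bias).
    """
    err = pred - ref
    opposite = (err * (noisy - ref)) < 0
    return np.mean(np.abs(err) * np.where(opposite, lam, 1.0))
```

With zero logits every kernel is uniform and the reconstruction reduces to a box blur; raising `lam` at training time trades bias for variance, which is the user control the abstract refers to.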

Copyright Notice