Neural Frame Interpolation for Rendered Content

 

We propose solutions for leveraging auxiliary features to obtain better motion estimates, handle occlusions more accurately, and correctly reconstruct non-linear motion between keyframes.

November 30, 2021
ACM SIGGRAPH Asia 2021

 

Authors

Karlis Martins Briedis (DisneyResearch|Studios/ETH Joint PhD)

Abdelaziz Djelouah (DisneyResearch|Studios)

Mark Meyer (Pixar Animation Studios)

Ian McGonigal (Industrial Light & Magic)

Markus Gross (DisneyResearch|Studios/ETH Zurich)

Christopher Schroers (DisneyResearch|Studios)

 


Abstract

The demand for rendered content continues to grow drastically. As rendering high-quality computer-generated images is often extremely computationally expensive, and thus costly, there is a strong incentive to reduce this computational burden. Recent learning-based frame interpolation methods have shown exciting progress but still fall short of the production-level quality that would be required to render fewer pixels and thereby save rendering time and cost. In this paper, we therefore propose a method specifically targeted at high-quality frame interpolation for rendered content. In this setting, we assume that the full rendered frame is available only at every $n$-th frame, while auxiliary feature buffers that are cheap to evaluate (e.g. depth, normals, albedo) are available for every frame. We propose solutions for leveraging such auxiliary features to obtain better motion estimates, handle occlusions more accurately, and correctly reconstruct non-linear motion between keyframes. With this, our method significantly pushes the state of the art in frame interpolation for rendered content and achieves production-level quality results.
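To make the setting more concrete, the following is a minimal sketch of how one segment between two keyframes could be processed under these assumptions. All names and the interpolator interface (`interpolate_segment`, `model`) are hypothetical illustrations of the input layout, not the authors' released code or method.

```python
# Hypothetical sketch of the input setting from the abstract: full renders
# exist only at the keyframes (every n-th frame), while cheap auxiliary
# buffers (depth, normals, albedo) exist for every frame in between.
# All names and the interpolator interface are illustrative assumptions.

def interpolate_segment(key0, key1, aux, model):
    """Reconstruct the in-between frames of one keyframe segment.

    key0, key1 : full renders at frames i and i + n, arrays of shape (H, W, 3)
    aux        : list of n + 1 per-frame dicts of auxiliary buffers, e.g.
                 {"depth": (H, W, 1), "normals": (H, W, 3), "albedo": (H, W, 3)}
    model      : a learned interpolator taking both keyframes plus the
                 auxiliary buffers of the keyframes and of the target frame
    """
    inbetweens = []
    for t in range(1, len(aux) - 1):
        # The per-frame auxiliary buffers guide motion estimation and
        # occlusion handling, so motion between keyframes need not be
        # assumed linear.
        inbetweens.append(model(key0, key1, aux[0], aux[t], aux[-1]))
    return inbetweens
```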

Copyright Notice