Kernel-Based Frame Interpolation for Spatio-Temporally Adaptive Rendering
We propose a frame interpolation method for rendered content with two key features; the first is a kernel-based frame synthesis model that predicts the interpolated frame as a linear mapping of the input images.
Karlis Martins Briedis (DisneyResearch|Studios / ETH Zürich)
Abdelaziz Djelouah (DisneyResearch|Studios)
Raphaël Ortiz (DisneyResearch|Studios)
Mark Meyer (Pixar Animation Studios)
Markus Gross (DisneyResearch|Studios / ETH Zürich)
Christopher Schroers (DisneyResearch|Studios)
Recently, there has been exciting progress in frame interpolation for rendered content. In this offline rendering setting, additional
inputs, such as albedo and depth, can be extracted from a scene at a very low cost and, when integrated in a suitable fashion, can
significantly improve the quality of the interpolated frames. Although existing approaches have shown good results, most high-quality interpolation methods use a synthesis network for direct color prediction. In complex scenarios, this can result in unpredictable behavior and lead to color artifacts. To mitigate this and to increase robustness, we propose to estimate the interpolated frame by predicting spatially varying kernels that operate on image splats. Kernel prediction ensures a linear mapping from the input images to the output and enables new opportunities, such as consistent and efficient interpolation of alpha values and of the many additional channels and render passes that might exist.
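To illustrate the linearity property the abstract relies on, the following is a minimal NumPy sketch (not the paper's actual architecture) of kernel-based synthesis: each output pixel is a weighted sum of a K×K neighborhood in each input splat, with the spatially varying weights assumed to come from a prediction network. The function name, tensor layout, and shapes are illustrative assumptions.

```python
import numpy as np

def apply_kernels(frames, kernels):
    """Synthesize an output frame as a linear combination of input pixels.

    frames:  list of S images of shape (H, W, C), e.g. splatted input frames.
    kernels: array of shape (S, K, K, H, W) -- one KxK weight patch per
             output pixel and per input splat (hypothetical layout).
    """
    S, K, _, H, W = kernels.shape
    pad = K // 2
    out = np.zeros(frames[0].shape, dtype=np.float64)
    for s in range(S):
        # Zero-pad so every output pixel sees a full KxK neighborhood.
        padded = np.pad(frames[s], ((pad, pad), (pad, pad), (0, 0)))
        for dy in range(K):
            for dx in range(K):
                w = kernels[s, dy, dx]                  # (H, W) weight map
                shifted = padded[dy:dy + H, dx:dx + W]  # (H, W, C) neighbors
                out += w[..., None] * shifted           # linear accumulation
    return out
```

Because the mapping is linear and the same per-pixel weights are applied to every channel, running the predicted kernels on color, alpha, or any additional render pass yields mutually consistent results: interpolating RGBA at once is identical to interpolating RGB and alpha separately with the same kernels.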