Blind image super-resolution with spatially variant degradations
We show how to extend our approach to the spatially variant degradations that typically arise in visual effects pipelines when compositing content from different sources, and how to enable both local and global user interaction in the upscaling process.
November 1, 2019
ACM SIGGRAPH Asia 2019
Authors
Victor Cornillère (DisneyResearch|Studios/ETH Joint M.Sc.)
Abdelaziz Djelouah (DisneyResearch|Studios)
Yifan Wang (ETH Zurich)
Olga Sorkine-Hornung (ETH Zurich)
Christopher Schroers (DisneyResearch|Studios)
Existing deep learning approaches to single image super-resolution (SR) have achieved impressive results but mostly assume a setting with fixed pairs of high resolution (HR) and low resolution (LR) images. However, robustly addressing realistic upscaling scenarios, where the relation between HR and LR images is unknown, requires blind image super-resolution. To this end, we propose a solution that relies on three components: First, we use a degradation-aware SR network to synthesize the HR image given an LR image and the corresponding blur kernel. Second, we train a kernel discriminator to analyze the generated HR image and predict the errors that arise when an incorrect blur kernel is provided to the generator. Finally, we present an optimization procedure that recovers both the degradation kernel and the HR image by minimizing the error predicted by our kernel discriminator. We also show how to extend our approach to the spatially variant degradations that typically arise in visual effects pipelines when compositing content from different sources, and how to enable both local and global user interaction in the upscaling process.
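The interplay of the three components can be illustrated with a minimal PyTorch sketch. The tiny untrained networks and the isotropic Gaussian kernel parameterization below are stand-ins chosen only to make the example self-contained and runnable; they are not the architectures or kernel space from the paper. The point is the final loop, which recovers the blur kernel by gradient descent on the error predicted by the kernel discriminator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DegradationAwareSR(nn.Module):
    """Stand-in kernel-conditioned SR generator: embeds the blur kernel,
    concatenates the embedding with the bilinearly upscaled LR image, and
    predicts a residual correction."""
    def __init__(self, kernel_size=21, scale=2):
        super().__init__()
        self.scale = scale
        self.kernel_fc = nn.Linear(kernel_size * kernel_size, 16)
        self.body = nn.Sequential(
            nn.Conv2d(3 + 16, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, lr, kernel):
        up = F.interpolate(lr, scale_factor=self.scale, mode="bilinear",
                           align_corners=False)
        emb = self.kernel_fc(kernel.flatten(1))            # (B, 16)
        emb = emb[:, :, None, None].expand(-1, -1, *up.shape[-2:])
        return up + self.body(torch.cat([up, emb], dim=1))

class KernelDiscriminator(nn.Module):
    """Stand-in discriminator: predicts a per-pixel map of the error caused
    by feeding the generator an incorrect blur kernel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sr):
        return self.net(sr)

def gaussian_kernel(sigma, size=21):
    """Isotropic Gaussian blur kernel, differentiable w.r.t. sigma
    (an assumed low-dimensional kernel parameterization for this sketch)."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).unsqueeze(0)                      # (1, size, size)

# In practice both networks would be pretrained; here they are frozen stand-ins.
generator = DegradationAwareSR().eval()
discriminator = KernelDiscriminator().eval()
for p in list(generator.parameters()) + list(discriminator.parameters()):
    p.requires_grad_(False)

lr_image = torch.rand(1, 3, 32, 32)                        # placeholder LR input

# Optimize the kernel parameter so the predicted error map vanishes.
log_sigma = torch.zeros(1, requires_grad=True)             # keeps sigma positive
optimizer = torch.optim.Adam([log_sigma], lr=0.05)
for step in range(100):
    optimizer.zero_grad()
    kernel = gaussian_kernel(log_sigma.exp())
    sr = generator(lr_image, kernel)                       # HR estimate
    loss = discriminator(sr).abs().mean()                  # predicted kernel error
    loss.backward()                                        # gradient flows to sigma
    optimizer.step()

print(f"recovered sigma = {log_sigma.exp().item():.3f}")
```

With trained networks, the same loop jointly yields the degradation kernel (via the optimized parameters) and the HR image (the generator output at convergence); with the random stand-ins above it only demonstrates the mechanics of differentiating through both networks into the kernel parameters.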