Lossy Image Compression with Foundation Diffusion Models
In this work, we formulate the removal of quantization error as a denoising task, using diffusion to recover lost information in the transmitted image latent.
September 28, 2024
European Conference on Computer Vision (ECCV 2024)
Authors
Lucas Relic (DisneyResearch|Studios/ETH Zurich)
Roberto Azevedo (DisneyResearch|Studios)
Markus Gross (DisneyResearch|Studios/ETH Zurich)
Christopher Schroers (DisneyResearch|Studios)
Abstract
Incorporating diffusion models in the image compression domain has the potential to produce realistic and detailed reconstructions, especially at extremely low bitrates. Previous methods focus on using diffusion models as expressive decoders that are robust to quantization errors in the conditioning signals. However, achieving competitive results in this manner requires costly training of the diffusion model and long inference times due to the iterative generative process. In this work, we formulate the removal of quantization error as a denoising task, using diffusion to recover lost information in the transmitted image latent. Our approach allows us to perform less than 10% of the full diffusion generative process and requires no architectural changes to the diffusion model, enabling the use of foundation models as a strong prior without additional fine-tuning of the backbone. Our proposed codec outperforms previous methods on quantitative realism metrics, and we verify that our reconstructions are qualitatively preferred by end users, even when other methods use twice the bitrate.
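To make the core idea concrete, here is a minimal sketch of a decoding pipeline in the spirit described above. It is not the authors' implementation: the `encode`, `decode`, and `denoise` callables are hypothetical placeholders for the frozen components of a pretrained latent diffusion model (e.g., a VAE and denoising network), the uniform quantizer stands in for the actual entropy-coded bitstream, and the intermediate timestep `t_star` at which quantization noise is matched to diffusion noise is an assumed free parameter.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the frozen parts of a pretrained latent
# diffusion model; dummy callables so the sketch runs end to end.
encode = lambda img: F.avg_pool2d(img, 8)                  # image -> latent
decode = lambda lat: F.interpolate(lat, scale_factor=8.0)  # latent -> image
denoise = lambda lat, t: lat                               # one denoising step (placeholder)

def quantize(latent, step=0.5):
    """Uniform scalar quantization standing in for the transmitted bitstream."""
    return torch.round(latent / step) * step

def decode_with_diffusion(q_latent, t_star=80, num_steps=5):
    """Treat the quantization error in the received latent as if it were
    diffusion noise at intermediate timestep t_star, then run only a few
    denoising steps (a small fraction of the full schedule) to remove it."""
    lat = q_latent
    for t in torch.linspace(t_star, 0, num_steps):
        lat = denoise(lat, t)
    return decode(lat)

image = torch.rand(1, 3, 256, 256)
received = quantize(encode(image))   # sender side: encode, quantize, transmit
reconstruction = decode_with_diffusion(received)
print(reconstruction.shape)          # torch.Size([1, 3, 256, 256])
```

The design point the sketch illustrates is that the denoising loop starts from the received latent at an intermediate timestep rather than from pure noise, which is why only a handful of denoising steps are needed instead of the full generative schedule.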