Spatially-Varying Image Warping: Evaluations and VLSI Implementations

 

In this work, we analyze and compare spatially-varying image warping techniques in terms of quality and computational performance.

December 31, 2013
Springer VLSI-SoC Book Chapter 2013

 

Authors

Pierre Greisen (Disney Research/ETH Joint PhD)

Michael Schaffner (Disney Research/ETH Joint PhD)

Danny Luu (ETH Zurich)

Val Mikos (ETH Zurich)

Simon Heinzle (Disney Research)

Frank Gürkaynak (ETH Zurich)

Aljoscha Smolic (Disney Research)

Abstract

Spatially-varying, non-linear image warping has attracted growing interest with the emergence of image-domain warping applications such as aspect-ratio retargeting or stereo remapping/stereo-to-multiview conversion. In contrast to the more common global image warping, e.g., zoom or rotation, the image transformation is a spatially-varying mapping that, in principle, enables arbitrary image transformations. A practical constraint is that transformed pixels keep their relative ordering, i.e., there are no fold-overs. In this work, we analyze and compare spatially-varying image warping techniques in terms of quality and computational performance. In particular, we consider aliasing artifacts, interpolation quality (sharpness), the number of arithmetic operations, and memory bandwidth requirements. Further, we present an architecture based on Gaussian filtering and an architecture based on bicubic interpolation, and compare the corresponding VLSI implementations.
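To make the concept concrete, the following Python/NumPy sketch (ours, not taken from the chapter) applies a spatially-varying backward mapping with plain bilinear interpolation and includes a simple fold-over check that verifies the mapped coordinates keep their relative ordering. Function names, the grayscale-image assumption, and the per-pixel coordinate maps map_x/map_y are illustrative only.

    import numpy as np

    def backward_warp_bilinear(src, map_x, map_y):
        # src: (H, W) grayscale image; map_x, map_y: (H, W) source coordinates
        # that the spatially-varying warp assigns to each output pixel.
        h, w = src.shape
        x0 = np.clip(np.floor(map_x).astype(int), 0, w - 2)
        y0 = np.clip(np.floor(map_y).astype(int), 0, h - 2)
        fx = np.clip(map_x - x0, 0.0, 1.0)
        fy = np.clip(map_y - y0, 0.0, 1.0)
        # Bilinear blend of the four neighboring source samples.
        top = (1 - fx) * src[y0, x0] + fx * src[y0, x0 + 1]
        bot = (1 - fx) * src[y0 + 1, x0] + fx * src[y0 + 1, x0 + 1]
        return (1 - fy) * top + fy * bot

    def has_fold_over(map_x, map_y):
        # The warp folds over if mapped coordinates reverse their order
        # along a row (x) or a column (y).
        return bool((np.diff(map_x, axis=1) < 0).any() or
                    (np.diff(map_y, axis=0) < 0).any())

This sketch only illustrates the spatially-varying mapping and the fold-over constraint mentioned in the abstract; the quality and filtering trade-offs (e.g., Gaussian filtering versus bicubic interpolation) analyzed in the chapter are not represented here.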

Copyright Notice