Panoramic Video from Unstructured Camera Arrays

May 2, 2015
Eurographics 2015

Authors

Federico Perazzi (Disney Research/ETH Zürich)

Alexander Sorkine-Hornung (Disney Research)

Henning Zimmer (Disney Research)

Peter Kaufmann (Disney Research)

Oliver Wang (Disney Research)

Scott Watson (Walt Disney Imagineering)

Markus Gross (Disney Research/ETH Zürich)

Abstract

We describe an algorithm for generating panoramic video from unstructured camera arrays. Artifact-free panorama stitching is impeded by parallax between input views. Common strategies such as multi-level blending or minimum-energy seams produce seamless results on quasi-static input. On video input, however, these approaches introduce noticeable visual artifacts due to the lack of global temporal and spatial coherence. In this paper we extend the basic concept of local warping for parallax removal. First, we introduce an error measure with increased sensitivity to stitching artifacts in regions with pronounced structure. Using this measure, our method efficiently finds an optimal ordering of pairwise warps for robust stitching with minimal parallax artifacts. Weighted extrapolation of warps into non-overlapping regions ensures temporal stability while avoiding visual discontinuities around transitions between views. Remaining global deformation introduced by the warps is spread over the entire panorama domain using constrained relaxation, staying as close as possible to the original input views. In combination, these contributions form the first system for spatiotemporally stable panoramic video stitching from unstructured camera array input. We provide comparisons to alternative stitching approaches and evaluate our method on several panoramic videos for different camera types and array layouts, with resolutions exceeding one hundred megapixels.
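As a concrete illustration of the warp ordering described above, the following is a minimal Python sketch of a greedy pairwise ordering driven by a structure-sensitive error measure. Everything here is an assumption for illustration: the names stitching_error and greedy_warp_order are hypothetical, and the gradient-weighted pixel difference is a stand-in for the paper's actual error measure, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' implementation: a structure-weighted
# stand-in error and a greedy ordering of pairwise warps over an unstructured
# camera array. Assumes overlap regions are already aligned and cropped to
# the same shape.
import numpy as np


def stitching_error(a: np.ndarray, b: np.ndarray) -> float:
    """Stand-in error between two aligned overlap regions: absolute pixel
    difference weighted by local gradient magnitude, so that misalignments
    in strongly structured regions are penalized more heavily."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    gy, gx = np.gradient(a)  # local image structure of one view
    structure = np.hypot(gx, gy)
    return float(np.mean(np.abs(a - b) * (1.0 + structure)))


def greedy_warp_order(errors: dict[tuple[int, int], float],
                      n_views: int) -> list[tuple[int, int]]:
    """Order pairwise warps greedily: seed with the lowest-error overlapping
    pair, then repeatedly attach the unplaced view whose overlap with an
    already-placed view has the smallest error."""
    (i, j), _ = min(errors.items(), key=lambda kv: kv[1])
    placed = {i, j}
    order = [(i, j)]
    while len(placed) < n_views:
        # Candidate warps connect exactly one placed and one unplaced view.
        candidates = [(pair, e) for pair, e in errors.items()
                      if (pair[0] in placed) ^ (pair[1] in placed)]
        if not candidates:
            break  # remaining views share no overlap with the placed set
        pair, _ = min(candidates, key=lambda kv: kv[1])
        order.append(pair)
        placed.update(pair)
    return order


if __name__ == "__main__":
    # Toy 3-view array: views 1 and 2 align best, so their warp comes first.
    errors = {(0, 1): 0.8, (1, 2): 0.3, (0, 2): 1.5}
    print(greedy_warp_order(errors, 3))  # [(1, 2), (0, 1)]
```

The sketch covers only the ordering step; per the abstract, the full system follows it with weighted warp extrapolation into non-overlapping regions and a constrained relaxation that spreads residual deformation over the panorama.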
