An Omnistereoscopic Video Pipeline for Capture and Display of Real-World VR


In this paper we describe a complete pipeline for the capture and display of real-world Virtual Reality video content, based on the concept of omnistereoscopic panoramas.

August 9, 2018
ACM Transactions on Graphics 2018


Authors

Christopher Schroers (Disney Research)

Jean-Charles Bazin (KAIST)

Alexander Sorkine-Hornung (Disney Research)


Abstract

We address important practical and theoretical issues that have remained undiscussed in previous works. On the capture side, we show how high-quality omnistereo video can be generated from a sparse set of cameras (16 in our prototype array) instead of the hundreds of input views previously required. Despite the small number of input views, our approach enables high-quality, real-time virtual head motion, providing an important additional cue for immersive depth perception compared to static stereoscopic video. We also provide an in-depth analysis of the camera array geometry required to meet specific stereoscopic output constraints, which is fundamental for achieving a plausible and fully controlled VR viewing experience. Finally, we describe additional insights on how to integrate omnistereo video panoramas with rendered CG content. We provide qualitative comparisons to alternative solutions, including depth-based view synthesis and the Facebook Surround 360 system. In summary, this paper provides a first complete guide and analysis for reimplementing a system for capturing and displaying real-world VR, which we demonstrate on several real-world examples captured with our prototype.
