Photogeometric Scene Flow for High-Detail Dynamic 3D Reconstruction


This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction.

December 11, 2015
International Conference on Computer Vision (ICCV) 2015


Authors

Paulo F. U. Gotardo (Disney Research)

Tomas Simon (Carnegie Mellon University)

Yaser Sheikh (Carnegie Mellon University)

Iain Matthews (Disney Research/Carnegie Mellon University)



Abstract

Photometric stereo (PS) is an established technique for high-detail reconstruction of 3D geometry and appearance. To correct for surface integration errors, PS is often combined with multiview stereo (MVS). With dynamic objects, PS reconstruction also faces the problem of computing optical flow (OF) for image alignment under rapid changes in illumination. Current PS methods typically compute optical flow and MVS as independent stages, each one with its own limitations and errors introduced by early regularization. In contrast, scene flow methods estimate geometry and motion, but lack the fine detail from PS. This paper proposes photogeometric scene flow (PGSF) for high-quality dynamic 3D reconstruction. PGSF performs PS, OF, and MVS simultaneously. It is based on two key observations: (i) while image alignment improves PS, PS allows for surfaces to be relit to improve alignment; (ii) PS provides surface gradients that render the smoothness term in MVS unnecessary, leading to truly data-driven, continuous depth estimates. This synergy is demonstrated in the quality of the resulting RGB appearance, 3D geometry, and 3D motion.
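For context on the abstract's claim that PS supplies dense surface gradients, the sketch below shows classical Lambertian photometric stereo under calibrated, distant lights. It is not the paper's joint PGSF solver; the function name photometric_stereo and the synthetic sphere example are illustrative assumptions, meant only to show where per-pixel albedo, normals, and gradients come from.

```python
"""Minimal sketch of classical Lambertian photometric stereo (PS), the
building block that PGSF refines jointly with optical flow and MVS.
Illustrative only; names and the synthetic example are assumptions."""

import numpy as np


def photometric_stereo(images, light_dirs):
    """Recover per-pixel albedo, unit normals, and surface gradients.

    images:     (k, h, w) array, one grayscale image per light.
    light_dirs: (k, 3) array of unit light directions.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                  # stack pixels: (k, h*w)

    # Lambertian model: I = L @ B, where B = albedo * normal (3 x pixels).
    B, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)

    albedo = np.linalg.norm(B, axis=0)         # per-pixel albedo
    normals = B / np.maximum(albedo, 1e-8)     # unit surface normals

    # Surface gradients p = dz/dx, q = dz/dy that can constrain depth.
    nx, ny, nz = normals
    nz = np.where(np.abs(nz) < 1e-8, 1e-8, nz)
    p = (-nx / nz).reshape(h, w)
    q = (-ny / nz).reshape(h, w)

    return albedo.reshape(h, w), normals.reshape(3, h, w), p, q


if __name__ == "__main__":
    # Synthetic example: a unit sphere rendered under three known lights.
    h = w = 64
    ys, xs = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    r2 = np.clip(1.0 - xs**2 - ys**2, 0.0, None)
    n_true = np.stack([xs, ys, np.sqrt(r2) + 1e-8], axis=0)
    n_true /= np.linalg.norm(n_true, axis=0, keepdims=True)

    L = np.array([[0.0, 0.0, 1.0],
                  [0.5, 0.0, 0.866],
                  [0.0, 0.5, 0.866]])
    imgs = np.clip(np.einsum('kc,chw->khw', L, n_true), 0.0, None)

    albedo, normals, p, q = photometric_stereo(imgs, L)
    print("albedo range:", albedo.min(), albedo.max())
```

The gradients p and q illustrate the kind of dense, per-pixel geometric constraint the abstract refers to: because PS already provides them, the MVS depth estimate can be driven by data alone rather than by a smoothness prior.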
