Distinguishing Texture Edges from Object Boundaries in Video

 

January 12, 2013
IEEE Transactions on Image Processing 2013

 

Authors

Oliver Wang (Disney Research)

Martina Dumcke (Disney Research/ETH Joint M.Sc.)

Aljoscha Smolic (Disney Research)

Markus Gross (Disney Research/ETH Zurich)

Abstract

One of the most fundamental problems in image processing and computer vision is the inherent ambiguity between texture edges and object boundaries in real-world images and video. Despite this ambiguity, many applications use image edge strength under the assumption that these edges approximate object depth boundaries. However, this assumption is often invalidated by real-world data, and the discrepancy is a significant limitation in many of today's image processing methods. We address this issue by introducing a simple, low-level, patch-consistency assumption that leverages the extra information present in video data to resolve the ambiguity. By analyzing how well patches can be modeled by simple transformations over time, we obtain an indication of which image edges correspond to texture edges versus object edges. We validate our approach by presenting results on a variety of scene types and by directly incorporating our augmented edge map into an existing optical-flow-based application, showing that our method can trivially suppress the detrimental effects of strong texture edges. Our approach is simple to implement and has the potential to improve a wide range of image- and video-based applications.
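The abstract's core intuition, that a patch near a texture edge can be explained by a simple transformation between frames while a patch straddling an object boundary cannot (because of occlusion and independent motion), can be illustrated with a minimal sketch. Note that everything below is our illustrative assumption, not the paper's actual formulation: the function name, the restriction to pure translation as the "simple transformation," the sum-of-squared-differences residual, and the small discrete search window are all simplifications chosen for clarity.

```python
import numpy as np

def patch_consistency(frame_a, frame_b, y, x, size=7, search=3):
    """Score how well the patch centered at (y, x) in frame_a can be
    explained by a pure translation into frame_b.

    A low score means the patch is well modeled by a simple transform
    over time (suggesting a texture edge); a high score means no nearby
    translation explains it (suggesting an object boundary / occlusion).
    This is a hypothetical simplification of the patch-consistency idea,
    not the paper's method.
    """
    h = size // 2
    patch = frame_a[y - h:y + h + 1, x - h:x + h + 1]
    best = np.inf
    # Exhaustive search over integer translations in a small window.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = frame_b[yy - h:yy + h + 1, xx - h:xx + h + 1]
            if cand.shape != patch.shape:
                continue  # skip candidates clipped by the image border
            ssd = float(np.sum((patch - cand) ** 2))
            best = min(best, ssd)
    return best
```

Under this toy model, a patch on a rigidly translating textured surface scores near zero (some translation matches it exactly), while a patch whose content is replaced between frames scores high, which is the kind of per-edge signal the augmented edge map would carry.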

Copyright Notice