A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation


We present a new benchmark dataset and evaluation methodology for the area of video object segmentation.

June 27, 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016


Authors

Federico Perazzi (Disney Research/ETH Joint PhD)

Jordi Pont-Tuset (ETH Zurich)

Brian McWilliams (Disney Research)

Luc Van Gool (ETH Zurich)

Markus Gross (Disney Research/ETH Zurich)

Alexander Sorkine-Hornung (Disney Research)


Abstract

Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impede the evolution of a field due to saturated algorithm performance and the lack of contemporary, high-quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high-quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motion blur and appearance changes. Each video is accompanied by a densely annotated, pixel-accurate, per-frame ground-truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future work.
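The first of the three metrics, measuring the spatial extent of the segmentation, is typically instantiated as the Jaccard index (intersection-over-union) between the estimated mask and the ground-truth mask of each frame. As a minimal sketch (the function name and the convention for two empty masks are illustrative, not taken from the paper):

```python
import numpy as np

def jaccard_index(pred, gt):
    """Region similarity: intersection-over-union of two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        # Both masks empty: treat as perfect agreement.
        return 1.0
    return np.logical_and(pred, gt).sum() / union

# Toy example: two overlapping 2x2 squares inside a 4x4 frame.
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True  # 4 pixels
gt   = np.zeros((4, 4), dtype=bool); gt[:2, 1:3] = True   # 4 pixels
print(jaccard_index(pred, gt))  # intersection 2 / union 6 ≈ 0.333
```

A per-sequence score would then be obtained by averaging this value over all annotated frames.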

Copyright Notice