Extending the Performance of Human Classifiers using a Viewpoint Specific Approach

 

This paper describes human classifiers that are ‘viewpoint specific’, i.e. specific to subjects observed by a particular camera in a particular scene.

January 6, 2015
IEEE Workshop on the Applications of Computer Vision (WACV) 2015

 

Authors

Endri Dibra (Disney Research/ETH Joint M.Sc.)

Jerome Maye (ETH Zurich)

Olga Diamanti (ETH Zurich)

Roland Siegwart (ETH Zurich)

Paul Beardsley (Disney Research)


Abstract

The advantages of the approach are (a) improved human detection in the presence of perspective foreshortening from an elevated camera, (b) the ability to handle partial occlusion of subjects, e.g. partial occlusion by furniture in an indoor scene, and (c) the ability to detect subjects that are partially truncated at the top, bottom, or sides of the image. Elevated camera views typically produce truncated views of subjects at the image edges, but our viewpoint specific method handles such cases and thereby extends overall detection coverage. The approach is to (a) define a tiling on the ground plane of the 3D scene, (b) generate training images per tile using virtual humans, (c) train a classifier per tile, and (d) run the classifiers on the real scene. The approach would be prohibitive if each new deployment required real training images, but it is feasible because training is done with virtual humans inserted into a scene model. The classifier is a linear SVM operating on HOG features. Experimental results provide a comparative analysis with existing algorithms to demonstrate the advantages described above.
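The following is a minimal sketch of the per-tile training step outlined in the abstract (one HOG + linear SVM classifier per ground-plane tile). The paper does not specify an implementation, so the use of scikit-image's hog(), scikit-learn's LinearSVC, the parameter values, and the assumed data layout of positive/negative crops per tile are illustrative assumptions only.

```python
# Sketch: train one linear SVM on HOG features for each ground-plane tile.
# Assumes positives are renderings of virtual humans placed on that tile and
# negatives are background crops from the same camera viewpoint (hypothetical
# data layout); all crops are fixed-size grayscale arrays.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC


def extract_hog(crops):
    """Compute a HOG descriptor for each fixed-size grayscale crop."""
    return np.array([
        hog(c, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for c in crops
    ])


def train_tile_classifiers(tiles):
    """Train a per-tile linear SVM.

    `tiles` maps a tile id to a (positive_crops, negative_crops) pair.
    Returns a dict of tile id -> fitted LinearSVC.
    """
    classifiers = {}
    for tile_id, (pos, neg) in tiles.items():
        X = np.vstack([extract_hog(pos), extract_hog(neg)])
        y = np.hstack([np.ones(len(pos)), np.zeros(len(neg))])
        clf = LinearSVC(C=0.01)   # regularization strength is an assumption
        clf.fit(X, y)
        classifiers[tile_id] = clf
    return classifiers
```

At test time, each classifier would be applied only to image regions consistent with its tile's expected subject position and scale under the known camera viewpoint, which is what allows the method to cope with foreshortening and truncation at the image edges.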

Copyright Notice