Generating and Ranking Diverse Multi-Character Interactions

Our novel ‘generate-and-rank’ approach rapidly and semi-automatically generates data-driven fight scenes from high-level text descriptions composed of simple clauses and phrases. From a database of captured motions and its associated motion graph, we first generate a ‘cascade’ of plausible scenes, which we then rank to recommend a small but high-quality and diverse selection to the user.

November 19, 2014
ACM SIGGRAPH Asia 2014

Authors

Jungdam Won (Disney Research/Seoul National University)

Kyungho Lee (Disney Research/Seoul National University)

Carol O’Sullivan (Disney Research)

Jessica Hodgins (Disney Research)

Jehee Lee (Disney Research/Seoul National University)

Abstract

In many application areas, such as animation for pre-visualizing movie sequences or the choreography of dance and other show performances, only a high-level description of the desired scene is provided as input, either written or verbal. Such sparsity, however, lends itself well to the creative process, as the choreographer, animator or director can be given more choice and control over the final scene. Animating scenes with multi-character interactions can be a particularly complex process, as there are many different constraints to enforce and actions to synchronize. Our novel ‘generate-and-rank’ approach rapidly and semi-automatically generates data-driven multi-character interaction scenes from high-level graphical descriptions composed of simple clauses and phrases. From a database of captured motions, we generate a multitude of plausible candidate scenes. We then efficiently and intelligently rank these scenes in order to recommend a small but high-quality and diverse selection to the user. This set can then be refined by re-ranking or by generating alternatives to specific interactions. While our approach is applicable to any scene that depicts multi-character interactions, we demonstrate its efficacy for choreographing fighting scenes and evaluate it in terms of performance and the diversity and coverage of the results.
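
To make the ‘generate-and-rank’ pipeline concrete, here is a minimal Python sketch of the two stages in miniature: candidate scenes are sampled as walks over a motion graph, and a small shortlist is then chosen greedily to balance individual quality against mutual diversity (in the spirit of maximal marginal relevance). Everything in it is illustrative rather than taken from the paper: the toy motion graph, the clip names, and the quality and distance functions are hypothetical placeholders, and the greedy selection is one plausible reading of the ranking step, not the authors’ actual method.

import random

# Toy "motion graph": nodes are short motion clips, edges are allowed
# transitions. A real system derives this from a motion-capture database;
# the graph and clip names below are hypothetical placeholders.
MOTION_GRAPH = {
    "idle":     ["approach", "idle"],
    "approach": ["punch", "kick", "dodge"],
    "punch":    ["idle", "kick"],
    "kick":     ["idle", "dodge"],
    "dodge":    ["approach", "idle"],
}

def generate_candidate(length=6):
    """Sample one candidate scene as a random walk over the motion graph."""
    clip, scene = "idle", []
    for _ in range(length):
        scene.append(clip)
        clip = random.choice(MOTION_GRAPH[clip])
    return tuple(scene)

def quality(scene):
    """Placeholder quality score: fraction of 'active' clips in the scene."""
    return sum(c in ("punch", "kick", "dodge") for c in scene) / len(scene)

def distance(a, b):
    """Placeholder diversity measure: per-step clip disagreement (Hamming)."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def rank_diverse(candidates, k=5, trade_off=0.5):
    """Greedily pick k scenes, trading off quality against novelty
    relative to the scenes already selected."""
    pool = list(dict.fromkeys(candidates))  # deduplicate, keep order
    selected = []
    while pool and len(selected) < k:
        def score(s):
            novelty = min((distance(s, t) for t in selected), default=1.0)
            return (1 - trade_off) * quality(s) + trade_off * novelty
        best = max(pool, key=score)
        pool.remove(best)
        selected.append(best)
    return selected

if __name__ == "__main__":
    candidates = [generate_candidate() for _ in range(200)]
    for scene in rank_diverse(candidates):
        print(" -> ".join(scene), f"(quality={quality(scene):.2f})")

Running the sketch prints a handful of scenes that each score well yet differ from one another, which is the behavior the ranking stage aims for: a compact, diverse shortlist the user can refine, rather than an exhaustive and redundant list of candidates.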

Copyright Notice