Animating an Autonomous 3D Talking Avatar

 

One of the main challenges with embodying an agent is annotating how and when motions can be played and composed together in real time, without visual artifacts.

March 13, 2019
arXiv 2019

 

Authors

Dominik Borer (Disney Research/ETH Joint PhD)

Dominic Lutz (Disney Research)

Robert W. Sumner (Disney Research)

Martin Guay (Disney Research)

Animating an Autonomous 3D Talking Avatar

Abstract

One of the main challenges with embodying an agent is annotating how and when motions can be played and composed together in real time, without visual artifacts. The inherent problem is to do so, for a large number of motions, without introducing mistakes in the annotation. To our knowledge, there is no automatic method that can process animations and label the actions and the compatibility between them. In practice, a state machine in which the clips are the actions is authored manually by setting connections between the states, along with timing parameters for those connections. Authoring this state machine for a large number of motions leads to visual clutter and increases the number of possible mistakes. As a consequence, agent embodiments are left with little variation and quickly become repetitive. In this paper, we address this problem with a compact taxonomy of chit-chat behaviors that we use to simplify and partially automate the graph-authoring process. We measured the time required to label the actions of an embodiment using our simple interface, compared to the standard state-machine interface in Unreal Engine, and found that our approach is 7 times faster. We believe that our labeling approach could be a path to fully automated labeling: once a subset of motions is labeled (using our interface), we could learn a predictor that attributes labels to new clips, allowing virtual agent embodiments to truly scale up.
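As an illustration of the state machine described in the abstract, the Python sketch below shows one way a clip graph with timed transitions could be represented, and how per-label compatibility rules (a compact taxonomy) can replace hand-authored per-clip connections. All names, label categories, and timing fields here are illustrative assumptions, not the paper's actual data structures or taxonomy.

from dataclasses import dataclass, field

# Hypothetical, minimal representation of a clip state machine.
# Clip names, label categories, and the compatibility rule are assumptions
# made for illustration; they are not taken from the paper.

@dataclass(frozen=True)
class Clip:
    name: str
    label: str          # e.g. "idle", "gesture" (assumed label categories)
    blend_out: float    # seconds before the clip ends at which a transition may start
    blend_in: float     # seconds of cross-fade when this clip starts

@dataclass
class ClipGraph:
    clips: list[Clip] = field(default_factory=list)
    # compatibility[label_a] -> set of labels that are allowed to follow label_a
    compatibility: dict[str, set[str]] = field(default_factory=dict)

    def transitions(self):
        """Derive the full transition list from per-label compatibility,
        instead of hand-authoring one connection per pair of clips."""
        for a in self.clips:
            allowed = self.compatibility.get(a.label, set())
            for b in self.clips:
                if b is not a and b.label in allowed:
                    # transition starts a.blend_out seconds before `a` ends,
                    # cross-fading into `b` over b.blend_in seconds
                    yield (a.name, b.name, a.blend_out, b.blend_in)

if __name__ == "__main__":
    graph = ClipGraph(
        clips=[
            Clip("idle_01", "idle", blend_out=0.3, blend_in=0.2),
            Clip("idle_02", "idle", blend_out=0.3, blend_in=0.2),
            Clip("wave_01", "gesture", blend_out=0.4, blend_in=0.25),
        ],
        compatibility={"idle": {"idle", "gesture"}, "gesture": {"idle"}},
    )
    for src, dst, out_t, in_t in graph.transitions():
        print(f"{src} -> {dst} (start {out_t}s before end, blend {in_t}s)")

The point of the sketch is that connections are derived from a handful of label-to-label rules rather than authored per pair of clips, which is what keeps the labeling effort from growing with the number of motions.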

Copyright Notice