Smooth Imitation Learning for Online Sequence Prediction



June 19, 2016
International Conference on Machine Learning (ICML) 2016


Authors

Hoang M. Le (California Institute of Technology)

Andrew Kang (California Institute of Technology)

Yisong Yue (California Institute of Technology)

Peter Carr (Disney Research)


Abstract

We study the problem of smooth imitation learning, where the goal is to train a policy that can smoothly imitate demonstrated behavior in a dynamic and continuous environment. Since the mapping from context to behavior can often be very complex, we take a learning reduction approach to “reduce” smooth imitation learning to a regression problem using complex function classes that are regularized to ensure smoothness. We present an online learning meta-algorithm that achieves fast and stable convergence to a good policy. Our approach enjoys several attractive properties, including being fully deterministic, employing an adaptive learning rate that can provably yield significantly larger policy improvements compared to previous approaches, and the ability to ensure stable convergence for complex smooth policy classes. We evaluate our approach in a case study on automated camera control, where the goal is to smoothly imitate an expert camera operator as she follows the action during a sporting event. Our empirical results demonstrate significant performance gains over previous approaches.
