A Recurrent Variational Autoencoder for Human Motion Synthesis
- Authors
Daniel Holden, Taku Komura, Jonathan Schwarz, Joe Yearsley, and Ikhsanul Habibie
- Subjects
Computer science, Software engineering, Artificial intelligence, Pattern recognition, Human motion, Autoencoder
- Abstract
We propose a novel generative model of human motion that can be trained on a large motion capture dataset and allows users to produce animations from high-level control signals. Because previous architectures struggle to predict motion far into the future due to the inherent ambiguity of the task, we argue that a user-provided control signal is desirable for animators and greatly reduces the predictive error over long sequences. We therefore formulate a framework that explicitly introduces an encoding of control signals into a variational inference framework trained to learn the manifold of human motion. As part of this framework, we place a prior on the latent space, which allows us to generate high-quality motion without providing frames from an existing sequence. We further model the sequential nature of the task by combining samples from a variational approximation to the intractable posterior with the control signal through a recurrent neural network (RNN) that synthesizes the motion. We show that our system can predict the movements of the human body over long horizons more accurately than state-of-the-art methods. Finally, the design of our system considers practical use cases and thus provides a competitive approach to motion synthesis.
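The abstract describes a pipeline in which an encoder maps a motion clip to an approximate posterior over latent variables, a sample from that posterior is combined with a per-frame control signal, and a recurrent decoder synthesizes the motion. The PyTorch sketch below illustrates that structure under stated assumptions: the module name, layer sizes, the GRU cells, and the standard-normal KL term are illustrative choices, not the authors' exact architecture (which, per the abstract, learns a prior over the latent space rather than fixing one).

```python
import torch
import torch.nn as nn

class ControlledMotionVAE(nn.Module):
    """Minimal sketch of a recurrent VAE conditioned on a control signal.

    Hypothetical dimensions: pose_dim joint parameters per frame,
    ctrl_dim control features per frame (e.g. root trajectory),
    z_dim latent variables.
    """

    def __init__(self, pose_dim=63, ctrl_dim=4, z_dim=32, hidden=256):
        super().__init__()
        self.encoder = nn.GRU(pose_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, z_dim)
        self.to_logvar = nn.Linear(hidden, z_dim)
        # Decoder RNN consumes the latent sample and the per-frame control signal.
        self.decoder = nn.GRU(z_dim + ctrl_dim, hidden, batch_first=True)
        self.to_pose = nn.Linear(hidden, pose_dim)

    def forward(self, motion, control):
        # motion: (batch, T, pose_dim); control: (batch, T, ctrl_dim)
        _, h = self.encoder(motion)                # final hidden state summarizes the clip
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        T = control.size(1)
        z_seq = z.unsqueeze(1).expand(-1, T, -1)   # broadcast z across time steps
        out, _ = self.decoder(torch.cat([z_seq, control], dim=-1))
        recon = self.to_pose(out)
        # KL against a fixed standard-normal prior; the paper instead learns the
        # prior, which is what allows sampling without a seed sequence.
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon, kl

# Usage: minimize reconstruction error plus the (weighted) KL term.
model = ControlledMotionVAE()
motion = torch.randn(8, 120, 63)   # 8 clips of 120 frames (synthetic data)
control = torch.randn(8, 120, 4)
recon, kl = model(motion, control)
loss = nn.functional.mse_loss(recon, motion) + 0.1 * kl
loss.backward()
```

Feeding the control signal into the decoder at every time step, rather than only at initialization, is what lets the generated motion track long-horizon user input instead of drifting as unconditional predictors do.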
- Published
2017