Sequential robot imitation learning from observations
- Author
- Ajay Kumar Tanwani, Sylvain Calinon, Jonathan Lee, Andy Yan, and Ken Goldberg
- Subjects
- business.industry, Computer science, Applied Mathematics, Mechanical Engineering, Imitation learning, Robot learning, Sequential structure, Artificial Intelligence, Modeling and Simulation, Robot, Hidden semi-Markov model, Artificial intelligence, Electrical and Electronic Engineering, business, Software
- Abstract
- This paper presents a framework to learn the sequential structure in demonstrations for robot imitation learning. We first present a family of task-parameterized hidden semi-Markov models that extract invariant segments (also called sub-goals or options) from demonstrated trajectories and optimally follow the sampled sequence of states from the model with a linear quadratic tracking controller. We then extend the concept to learning invariant segments from visual observations that are sequenced together for robot imitation. We present Motion2Vec, which learns a deep embedding space by minimizing a metric-learning loss in a Siamese network: images from the same action segment are pulled together while being pushed away from randomly sampled images of other segments, and a time-contrastive loss preserves the temporal ordering of the images. The trained embeddings are segmented with a recurrent neural network and subsequently used to decode the end-effector pose of the robot. We first show its application to a pick-and-place task with the Baxter robot, avoiding a moving obstacle from only four kinesthetic demonstrations, and then to suturing task imitation from publicly available suturing videos of the JIGSAWS dataset, with state-of-the-art [Formula: see text]% segmentation accuracy and [Formula: see text] cm error in position per observation on the test set.
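  As a rough illustration of the two objectives described in the abstract, the sketch below (PyTorch, not the authors' implementation) pairs a triplet-style metric-learning loss over a shared Siamese encoder with a time-contrastive loss; the `Encoder` architecture, tensor shapes, margins, and function names are hypothetical stand-ins.

  ```python
  # Minimal sketch of the Motion2Vec-style objectives, assuming a shared
  # (Siamese) encoder and toy image shapes. Not the authors' code.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class Encoder(nn.Module):
      """Toy stand-in for the shared embedding network."""
      def __init__(self, embed_dim=32):
          super().__init__()
          self.net = nn.Sequential(
              nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
              nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
              nn.Linear(32, embed_dim),
          )

      def forward(self, x):
          # L2-normalize so distances are comparable across batches
          return F.normalize(self.net(x), dim=1)

  def segment_triplet_loss(z_anchor, z_same_seg, z_other_seg, margin=0.2):
      """Pull same-segment images together, push other-segment images away."""
      d_pos = (z_anchor - z_same_seg).pow(2).sum(dim=1)
      d_neg = (z_anchor - z_other_seg).pow(2).sum(dim=1)
      return F.relu(d_pos - d_neg + margin).mean()

  def time_contrastive_loss(z_t, z_near, z_far, margin=0.2):
      """Prefer temporally nearby frames over distant ones, preserving order."""
      d_near = (z_t - z_near).pow(2).sum(dim=1)
      d_far = (z_t - z_far).pow(2).sum(dim=1)
      return F.relu(d_near - d_far + margin).mean()

  # Usage on random stand-in frames (batch of 8 RGB images, 64x64):
  enc = Encoder()
  imgs = {k: torch.randn(8, 3, 64, 64)
          for k in ("anchor", "same_seg", "other_seg", "near", "far")}
  z = {k: enc(v) for k, v in imgs.items()}
  loss = (segment_triplet_loss(z["anchor"], z["same_seg"], z["other_seg"])
          + time_contrastive_loss(z["anchor"], z["near"], z["far"]))
  loss.backward()  # gradients flow through the single shared encoder
  ```

  In this reading, both terms act on embeddings from one encoder applied to every image, which is what makes the network Siamese; the downstream segmentation RNN and pose decoder mentioned in the abstract would consume these embeddings and are omitted here.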
- Published
- 2021