
RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning

Authors:
Chen, Lawrence Yunliang
Xu, Chenfeng
Dharmarajan, Karthik
Irshad, Zubair
Cheng, Richard
Keutzer, Kurt
Tomizuka, Masayoshi
Vuong, Quan
Goldberg, Ken
Publication Year:
2024

Abstract

Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question. Emerging research such as the Open X-Embodiment (OXE) project has shown promise in leveraging skills by combining datasets that include different robots. However, imbalances in the distribution of robot types and camera angles in many datasets make policies prone to overfitting. To mitigate this issue, we propose RoVi-Aug, which leverages state-of-the-art image-to-image generative models to augment robot data by synthesizing demonstrations with different robots and camera views. Through extensive physical experiments, we show that, by training on robot- and viewpoint-augmented data, RoVi-Aug policies can be deployed zero-shot on an unseen robot with significantly different camera angles. Compared to test-time adaptation algorithms such as Mirage, RoVi-Aug requires no extra processing at test time, does not assume known camera angles, and allows policy fine-tuning. Moreover, by co-training on both the original and augmented robot datasets, RoVi-Aug can learn multi-robot and multi-task policies, enabling more efficient transfer between robots and skills and improving success rates by up to 30%.

Comment: CoRL 2024 (Oral)
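The augmentation loop the abstract describes (synthesize each demonstration with a different robot and a different camera view, then co-train on original plus augmented data) can be sketched roughly as follows. This is a minimal illustration only: the two generative-model calls are stand-ins implemented as trivial list operations, and every function name and parameter here is hypothetical, not from the paper.

```python
def robot_augment(frame, target_robot):
    # Stand-in for a robot-to-robot image-to-image model that would
    # repaint the source arm as `target_robot`; here it just copies.
    return [row[:] for row in frame]

def viewpoint_augment(frame, shift):
    # Stand-in for a novel-view synthesis model; a horizontal pixel
    # roll crudely mimics a small camera-angle change.
    return [row[-shift:] + row[:-shift] for row in frame]

def augment_demo(frames, target_robot="franka", shift=2):
    # Apply both augmentations to every frame of one demonstration.
    return [viewpoint_augment(robot_augment(f, target_robot), shift)
            for f in frames]

# A toy "demonstration": 5 frames of an 8x8 single-channel image.
demo = [[[0] * 8 for _ in range(8)] for _ in range(5)]
augmented = augment_demo(demo)

# Co-training set, per the abstract: original demos + augmented demos.
dataset = demo + augmented
```

The point of the sketch is the data flow, not the models: the policy never sees only one robot/viewpoint pairing, which is what the abstract credits for the zero-shot transfer.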

Subjects

Computer Science - Robotics

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.03403
Document Type:
Working Paper