
AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents

Authors :
Ahn, Michael
Dwibedi, Debidatta
Finn, Chelsea
Arenas, Montse Gonzalez
Gopalakrishnan, Keerthana
Hausman, Karol
Ichter, Brian
Irpan, Alex
Joshi, Nikhil
Julian, Ryan
Kirmani, Sean
Leal, Isabel
Lee, Edward
Levine, Sergey
Lu, Yao
Maddineni, Sharath
Rao, Kanishka
Sadigh, Dorsa
Sanketi, Pannag
Sermanet, Pierre
Vuong, Quan
Welker, Stefan
Xia, Fei
Xiao, Ted
Xu, Peng
Xu, Steve
Xu, Zhuo
Publication Year :
2024

Abstract

Foundation models that incorporate language, vision, and more recently actions have revolutionized the ability to harness internet-scale data to reason about useful tasks. However, one of the key challenges of training embodied foundation models is the lack of data grounded in the physical world. In this paper, we propose AutoRT, a system that leverages existing foundation models to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision. AutoRT leverages vision-language models (VLMs) for scene understanding and grounding, and further uses large language models (LLMs) for proposing diverse and novel instructions to be performed by a fleet of robots. Guiding data collection by tapping into the knowledge of foundation models enables AutoRT to effectively reason about autonomy tradeoffs and safety while significantly scaling up data collection for robot learning. We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies. We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data collection robots that can align to human preferences.

Comment : 26 pages, 9 figures, ICRA 2024 VLMNM Workshop
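
The abstract describes a pipeline in which a VLM grounds each robot's scene in language, an LLM proposes candidate instructions, a safety/feasibility check prunes them, and the surviving tasks are routed to teleoperation or an autonomous policy. The Python sketch below illustrates how such an orchestration loop could be wired together; it is a hypothetical illustration under assumed interfaces, and every name in it (Robot, describe_scene, propose_tasks, filter_tasks, dispatch) is a stand-in, not the paper's actual API.

```python
"""Minimal sketch of an AutoRT-style orchestration loop as described in the
abstract. All functions here are hypothetical stand-ins with canned outputs,
not the paper's implementation."""

import random
from dataclasses import dataclass
from typing import List


@dataclass
class Robot:
    robot_id: int
    has_teleoperator: bool  # whether a human operator is available for this robot


def describe_scene(robot: Robot) -> str:
    """Stand-in for a VLM that grounds the robot's camera view in language."""
    return "a kitchen counter with a sponge, a cup, and a closed drawer"


def propose_tasks(scene: str, num_tasks: int = 5) -> List[str]:
    """Stand-in for an LLM that proposes diverse instructions for the scene."""
    return [
        "pick up the sponge",
        "wipe the counter with the sponge",
        "open the drawer",
        "place the cup in the drawer",
        "stack the cup on the sponge",
    ][:num_tasks]


def filter_tasks(tasks: List[str]) -> List[str]:
    """Stand-in for the LLM-based safety and feasibility check; here we simply
    drop tasks involving the drawer as a toy example of pruning."""
    return [t for t in tasks if "drawer" not in t]


def dispatch(robot: Robot, task: str) -> dict:
    """Route a task to teleoperation or an autonomous policy and log an episode."""
    mode = "teleop" if robot.has_teleoperator and random.random() < 0.5 else "autonomous"
    return {"robot": robot.robot_id, "task": task, "mode": mode}


def collection_step(fleet: List[Robot]) -> List[dict]:
    """One round of scene grounding, task proposal, filtering, and execution."""
    episodes = []
    for robot in fleet:
        scene = describe_scene(robot)
        tasks = filter_tasks(propose_tasks(scene))
        if tasks:
            episodes.append(dispatch(robot, random.choice(tasks)))
    return episodes


if __name__ == "__main__":
    # A fleet of 20 robots, a quarter of which have a teleoperator attached.
    fleet = [Robot(robot_id=i, has_teleoperator=(i % 4 == 0)) for i in range(20)]
    for episode in collection_step(fleet):
        print(episode)
```

In practice the loop would run continuously across buildings, with the proposal and filtering stages backed by real VLM/LLM calls and the logged episodes accumulating into the robot-learning dataset the abstract reports.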

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2401.12963
Document Type :
Working Paper