Grounding Language with Visual Affordances over Unstructured Data
- Publication Year :
- 2022
Abstract
- Recent works have shown that Large Language Models (LLMs) can be applied to ground natural language to a wide variety of robot skills. However, in practice, learning multi-task, language-conditioned robotic skills typically requires large-scale data collection and frequent human intervention to reset the environment or help correct the current policies. In this work, we propose a novel approach to efficiently learn general-purpose language-conditioned robot skills from unstructured, offline and reset-free data in the real world by exploiting a self-supervised visuo-lingual affordance model, which requires annotating as little as 1% of the total data with language. We evaluate our method in extensive experiments on both simulated and real-world robotic tasks, achieving state-of-the-art performance on the challenging CALVIN benchmark and learning over 25 distinct visuomotor manipulation tasks with a single policy in the real world. We find that when paired with LLMs to break down abstract natural language instructions into subgoals via few-shot prompting, our method is capable of completing long-horizon, multi-tier tasks in the real world, while requiring an order of magnitude less data than previous approaches. Code and videos are available at http://hulc2.cs.uni-freiburg.de
- Comment: Accepted at the 2023 IEEE International Conference on Robotics and Automation (ICRA). Project website: http://hulc2.cs.uni-freiburg.de
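The abstract mentions using few-shot prompting to have an LLM break an abstract instruction into subgoals for the language-conditioned policy. The following is a minimal sketch of that idea, not the authors' implementation: the few-shot examples and subgoal names are hypothetical, and the LLM call is left as an injected callable so no specific provider API is assumed.

```python
from typing import Callable, List

# Hypothetical few-shot examples; the paper's actual prompts may differ.
FEW_SHOT_EXAMPLES = """\
Instruction: tidy up the workspace
Subgoals:
1. pick up the red block
2. place the red block in the drawer
3. close the drawer

Instruction: turn on the light and open the drawer
Subgoals:
1. push the light switch
2. pull the drawer open
"""


def decompose_instruction(instruction: str,
                          llm: Callable[[str], str]) -> List[str]:
    """Build a few-shot prompt and parse the LLM completion into subgoals."""
    prompt = FEW_SHOT_EXAMPLES + f"\nInstruction: {instruction}\nSubgoals:\n"
    completion = llm(prompt)
    subgoals = []
    for line in completion.splitlines():
        line = line.strip()
        if not line:
            break  # stop at the first blank line after the numbered list
        # strip a leading "1. "-style index if present
        subgoals.append(line.split(". ", 1)[-1])
    return subgoals


if __name__ == "__main__":
    # Stand-in LLM for demonstration; replace with a real model call.
    fake_llm = lambda p: "1. pick up the blue block\n2. place it in the sliding cabinet"
    print(decompose_instruction("store the blue block", fake_llm))
```

Each returned subgoal string would then be passed to the language-conditioned visuomotor policy in sequence; how subgoal completion is detected is outside the scope of this sketch.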
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2210.01911
- Document Type :
- Working Paper