
Human-Timescale Adaptation in an Open-Ended Task Space

Authors:
Adaptive Agent Team
Bauer, Jakob
Baumli, Kate
Baveja, Satinder
Behbahani, Feryal
Bhoopchand, Avishkar
Bradley-Schmieg, Nathalie
Chang, Michael
Clay, Natalie
Collister, Adrian
Dasagi, Vibhavari
Gonzalez, Lucy
Gregor, Karol
Hughes, Edward
Kashem, Sheleem
Loks-Thompson, Maria
Openshaw, Hannah
Parker-Holder, Jack
Pathak, Shreya
Perez-Nieves, Nicolas
Rakicevic, Nemanja
Rocktäschel, Tim
Schroecker, Yannick
Sygnowski, Jakub
Tuyls, Karl
York, Sarah
Zacherl, Alexander
Zhang, Lei
Publication Year: 2023

Abstract

Foundation models have shown impressive adaptation and scalability in supervised and self-supervised learning problems, but so far these successes have not fully translated to reinforcement learning (RL). In this work, we demonstrate that training an RL agent at scale leads to a general in-context learning algorithm that can adapt to open-ended novel embodied 3D problems as quickly as humans. In a vast space of held-out environment dynamics, our adaptive agent (AdA) displays on-the-fly hypothesis-driven exploration, efficient exploitation of acquired knowledge, and can successfully be prompted with first-person demonstrations. Adaptation emerges from three ingredients: (1) meta-reinforcement learning across a vast, smooth and diverse task distribution, (2) a policy parameterised as a large-scale attention-based memory architecture, and (3) an effective automated curriculum that prioritises tasks at the frontier of an agent's capabilities. We demonstrate characteristic scaling laws with respect to network size, memory length, and richness of the training task distribution. We believe our results lay the foundation for increasingly general and adaptive RL agents that perform well across ever-larger open-ended domains.
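To make ingredient (3) concrete, below is a minimal sketch of one common way to implement a curriculum that prioritises tasks at the frontier of an agent's capabilities. It is an illustrative assumption, not AdA's published mechanism: the function names (`frontier_score`, `sample_task`) and the scoring rule (up-weighting tasks with intermediate success rates, which are neither already mastered nor currently hopeless) are hypothetical.

```python
import random

def frontier_score(success_rate: float) -> float:
    """Peaks at 0.5: tasks the agent sometimes, but not always, solves."""
    return success_rate * (1.0 - success_rate)

def sample_task(success_rates: dict[str, float], eps: float = 0.01) -> str:
    """Sample a task id with probability proportional to its frontier score.

    `eps` keeps every task reachable, so success-rate estimates can
    keep updating even for tasks currently scored near zero.
    """
    tasks = list(success_rates)
    weights = [frontier_score(success_rates[t]) + eps for t in tasks]
    return random.choices(tasks, weights=weights, k=1)[0]

# Example with hypothetical task ids: a task solved about half the time is
# sampled far more often than one the agent always or never solves.
rates = {"cross_bridge": 0.5, "trivial_pickup": 1.0, "hard_maze": 0.0}
print(sample_task(rates))
```

Under this scheme the curriculum shifts automatically as the agent improves: once a task's success rate approaches 1.0, its sampling weight decays and training effort moves to newly tractable tasks.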

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2301.07608
Document Type: Working Paper