
When does return-conditioned supervised learning work for offline reinforcement learning?

Authors:
Brandfonbrener, David
Bietti, Alberto
Buckman, Jacob
Laroche, Romain
Bruna, Joan
Publication Year: 2022

Abstract

Several recent works have proposed a class of algorithms for the offline reinforcement learning (RL) problem that we will refer to as return-conditioned supervised learning (RCSL). RCSL algorithms learn the distribution of actions conditioned on both the state and the return of the trajectory. Then they define a policy by conditioning on achieving high return. In this paper, we provide a rigorous study of the capabilities and limitations of RCSL, something which is crucially missing in previous work. We find that RCSL returns the optimal policy under a set of assumptions that are stronger than those needed for the more traditional dynamic programming-based algorithms. We provide specific examples of MDPs and datasets that illustrate the necessity of these assumptions and the limits of RCSL. Finally, we present empirical evidence that these limitations will also cause issues in practice by providing illustrative experiments in simple point-mass environments and on datasets from the D4RL benchmark.
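
As a concrete illustration of the RCSL recipe described above, here is a minimal sketch for discrete actions. It is not the authors' implementation: PolicyNet, train_rcsl, act, and the target-return value are illustrative assumptions, and the dataset is assumed to yield batches of (state, return-to-go, action) tensors.

```python
# Minimal RCSL sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """MLP mapping (state, return-to-go) to action logits."""
    def __init__(self, state_dim, num_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state, ret):
        # Condition on the scalar return by concatenating it to the state.
        return self.net(torch.cat([state, ret], dim=-1))

def train_rcsl(policy, dataset, epochs=10, lr=1e-3):
    """Plain supervised learning: maximize the likelihood of the dataset's
    actions conditioned on the state and the observed return-to-go."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for states, returns, actions in dataset:  # assumed batch iterator
            loss = loss_fn(policy(states, returns), actions)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return policy

def act(policy, state, target_return):
    """At evaluation time, condition the learned policy on a high
    desired return rather than an observed one."""
    ret = torch.tensor([[target_return]], dtype=torch.float32)
    logits = policy(state.unsqueeze(0), ret)
    return logits.argmax(dim=-1).item()
```

The defining design choice is that training is ordinary maximum-likelihood supervised learning; control comes entirely from the return value the policy is conditioned on at evaluation time, which is exactly where the assumptions and failure modes studied in the paper enter.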

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2206.01079
Document Type: Working Paper