
Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning

Authors:
Siegel, Noah Y.
Springenberg, Jost Tobias
Berkenkamp, Felix
Abdolmaleki, Abbas
Neunert, Michael
Lampe, Thomas
Hafner, Roland
Heess, Nicolas
Riedmiller, Martin
Source:
ICLR 2020
Publication Year:
2020

Abstract

Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real-world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data sources. We find improvements over competitive baselines on a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.
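
The core idea in the abstract -- fit a behavior prior by cloning only those dataset actions whose estimated advantage suggests they "previously worked", then use that prior to bias the RL policy -- can be sketched as follows. This is a minimal illustration, assuming a PyTorch setup; the network architecture, the binary advantage filter, the helper names, and all hyper-parameters are assumptions made here for illustration, not the authors' implementation.

```python
# Minimal sketch of an advantage-weighted behavior-modelling prior (assumption:
# PyTorch; shapes and hyper-parameters are illustrative only).
import torch
import torch.nn as nn

obs_dim, act_dim = 17, 6  # assumed environment dimensions


class GaussianPolicy(nn.Module):
    """Simple diagonal-Gaussian policy used as the learned prior."""

    def __init__(self, obs_dim, act_dim, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean = nn.Linear(hidden, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def dist(self, obs):
        h = self.trunk(obs)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())


prior = GaussianPolicy(obs_dim, act_dim)
opt = torch.optim.Adam(prior.parameters(), lr=3e-4)


def abm_prior_loss(obs, act, advantage):
    """Advantage-weighted behavior cloning: maximize the log-probability of
    dataset actions whose estimated advantage is non-negative. The binary
    filter 1[A >= 0] is one possible weighting (an exponential weight is
    another); either way the prior concentrates on actions that worked."""
    weight = (advantage >= 0).float()
    log_prob = prior.dist(obs).log_prob(act).sum(-1)
    return -(weight * log_prob).mean()


# Usage: random tensors stand in for a mini-batch from the fixed offline
# dataset; `adv` would come from a learned critic in practice.
obs = torch.randn(128, obs_dim)
act = torch.randn(128, act_dim)
adv = torch.randn(128)
loss = abm_prior_loss(obs, act, adv)
opt.zero_grad(); loss.backward(); opt.step()
```

In the full method described by the abstract, such a prior would be trained on the fixed batch alongside a critic, and the task policy would then be improved while being kept close to the prior, so that it favors actions the data shows to have been successful.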

Details

Database:
arXiv
Journal:
ICLR 2020
Publication Type:
Report
Accession number:
edsarx.2002.08396
Document Type:
Working Paper