
Evaluating model-based planning and planner amortization for continuous control

Authors:
Byravan, Arunkumar
Hasenclever, Leonard
Trochim, Piotr
Mirza, Mehdi
Ialongo, Alessandro Davide
Tassa, Yuval
Springenberg, Jost Tobias
Abdolmaleki, Abbas
Heess, Nicolas
Merel, Josh
Riedmiller, Martin
Publication Year:
2021

Abstract

There is a widespread intuition that model-based control methods should be able to surpass the data efficiency of model-free approaches. In this paper we attempt to evaluate this intuition on various challenging locomotion tasks. We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning; the learned policy serves as a proposal for MPC. We find that well-tuned model-free agents are strong baselines even for high DoF control problems, but MPC with learned proposals and models (trained on the fly or transferred from related tasks) can significantly improve performance and data efficiency in hard multi-task/multi-goal settings. Finally, we show that it is possible to distil a model-based planner into a policy that amortizes the planning computation without any loss of performance. Videos of agents performing different tasks can be seen at https://sites.google.com/view/mbrl-amortization/home.

Comment: 9 pages main text, 30 pages with references and appendix, including several ablations and additional experiments. Submitted to ICLR 2022.
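The hybrid scheme the abstract describes, using a learned policy as a proposal distribution for MPC, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation; `model_step`, `reward_fn`, and `proposal` are hypothetical stand-ins for the learned dynamics model, the reward function, and the learned proposal policy:

```python
import numpy as np


def mpc_with_proposal(state, model_step, reward_fn, proposal,
                      horizon=10, n_samples=64, noise_std=0.1, rng=None):
    """Sample action sequences around a proposal policy, roll them out
    through a learned model, and return the first action of the
    best-scoring sequence (receding-horizon control).

    All callables here are hypothetical placeholders, not the paper's API.
    """
    rng = np.random.default_rng(rng)
    best_score, best_first_action = -np.inf, None
    for _ in range(n_samples):
        s, score, first = state, 0.0, None
        for t in range(horizon):
            # Perturb the proposal policy's action with Gaussian noise.
            mean = proposal(s)
            a = mean + noise_std * rng.standard_normal(np.shape(mean))
            if t == 0:
                first = a
            # Roll the learned model forward and accumulate predicted reward.
            s = model_step(s, a)
            score += reward_fn(s, a)
        if score > best_score:
            best_score, best_first_action = score, first
    return best_first_action


# Toy 1-D usage: dynamics s' = s + a, reward -s^2, proposal pulls toward 0.
action = mpc_with_proposal(1.0,
                           model_step=lambda s, a: s + a,
                           reward_fn=lambda s, a: -s * s,
                           proposal=lambda s: -0.5 * s,
                           rng=0)
```

In practice the planner replans at every environment step, executing only the first action of the chosen sequence; distilling the planner, as the paper discusses, means training a policy to imitate those planned actions so the search is no longer needed at execution time.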

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.03363
Document Type:
Working Paper