
Model-Based Reinforcement Learning with SINDy

Authors:
Arora, Rushiv
da Silva, Bruno Castro
Moss, Eliot
Publication Year:
2022

Abstract

We draw on the latest advancements in the physics community to propose a novel method for discovering the governing non-linear dynamics of physical systems in reinforcement learning (RL). We establish that this method is capable of discovering the underlying dynamics from significantly fewer trajectories (as few as one rollout with $\leq 30$ time steps) than state-of-the-art model-learning algorithms. Further, the technique learns a model that is accurate enough to induce near-optimal policies from significantly fewer trajectories than those required by model-free algorithms. For systems with physics-based dynamics, it brings the benefits of model-based RL without requiring a model to be developed in advance. To establish the validity and applicability of this algorithm, we conduct experiments on four classic control tasks. We find that an optimal policy trained on the discovered dynamics of the underlying system generalizes well. Further, the learned policy performs well when deployed on the actual physical system, thus bridging the model-to-real-system gap. We also compare our method to state-of-the-art model-based and model-free approaches and show that it requires fewer trajectories sampled on the true physical system than other methods. Additionally, we explore approximate dynamics models and find that they, too, can perform well.

Comments: 8 pages, 1 figure, 1 table, 1 algorithm; presented at the Decision Awareness in Reinforcement Learning workshop held at the International Conference on Machine Learning, 22 July 2022, Baltimore, MD, USA.
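Though the abstract does not name it, the title indicates that the dynamics-discovery step is SINDy (Sparse Identification of Nonlinear Dynamics). The following is a minimal, self-contained sketch of the core SINDy idea: sequentially thresholded least squares over a library of candidate functions, applied here to a single short pendulum rollout. The pendulum system, the candidate library, and the threshold value are illustrative assumptions, not details taken from the paper.

```python
# Sketch of SINDy-style dynamics discovery via sequentially thresholded
# least squares (STLSQ). Not the authors' implementation; the system,
# library terms, and threshold are illustrative choices.
import numpy as np

def rollout_pendulum(n_steps=30, dt=0.02):
    """One short rollout of a frictionless pendulum:
    theta_dot = omega, omega_dot = -(g/l) * sin(theta)."""
    g_over_l = 9.8
    x = np.array([1.0, 0.0])           # state: [theta, omega]
    traj = [x.copy()]
    for _ in range(n_steps):
        dx = np.array([x[1], -g_over_l * np.sin(x[0])])
        x = x + dt * dx                # forward Euler step
        traj.append(x.copy())
    return np.array(traj), dt

def candidate_library(X):
    """Candidate terms Theta(X) for the sparse regression."""
    theta, omega = X[:, 0], X[:, 1]
    names = ["1", "theta", "omega", "sin(theta)", "cos(theta)", "theta*omega"]
    Theta = np.column_stack([
        np.ones_like(theta), theta, omega,
        np.sin(theta), np.cos(theta), theta * omega,
    ])
    return Theta, names

def stlsq(Theta, dXdt, threshold=0.1, n_iters=10):
    """Repeatedly solve least squares and zero out small coefficients,
    refitting over the surviving (active) terms."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):        # refit each state dimension
            active = ~small[:, k]
            if active.any():
                Xi[active, k] = np.linalg.lstsq(
                    Theta[:, active], dXdt[:, k], rcond=None)[0]
    return Xi

X, dt = rollout_pendulum()
dXdt = np.gradient(X, dt, axis=0)             # finite-difference derivatives
Theta, names = candidate_library(X)
Xi = stlsq(Theta, dXdt)
for k, lhs in enumerate(["theta_dot", "omega_dot"]):
    terms = [f"{Xi[j, k]:+.2f}*{names[j]}"
             for j in range(len(names)) if Xi[j, k] != 0.0]
    print(lhs, "=", " ".join(terms))
```

On noiseless data like this, a single rollout of $\leq 30$ steps suffices for the regression to isolate the active terms, which is consistent with the sample-efficiency claim above. A real system would additionally need noise-robust derivative estimates, and an RL setting would append control inputs to the candidate library (SINDy with control).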

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2208.14501
Document Type:
Working Paper