1. Residual Learning and Context Encoding for Adaptive Offline-to-Online Reinforcement Learning
- Authors
Nakhaei, Mohammadreza; Scannell, Aidan; and Pajarinen, Joni
- Subjects
Computer Science - Machine Learning, Computer Science - Robotics
- Abstract
Offline reinforcement learning (RL) allows learning sequential behavior from fixed datasets. Since offline datasets do not cover all possible situations, many methods collect additional data during online fine-tuning to improve performance. In general, these methods assume that the transition dynamics remain the same during both the offline and online phases of training. However, in many real-world applications, such as outdoor construction and navigation over rough terrain, it is common for the transition dynamics to vary between the offline and online phases. Moreover, the dynamics may vary during online fine-tuning. To address this problem of changing dynamics from offline to online RL, we propose a residual learning approach that infers dynamics changes to correct the outputs of the offline solution. During the online fine-tuning phase, we train a context encoder to learn a representation that is consistent within the current online learning environment while being able to predict transition dynamics. Experiments in D4RL MuJoCo environments, modified to support dynamics changes upon environment resets, show that our approach can adapt to these dynamics changes and generalize to unseen perturbations in a sample-efficient way, whilst comparison methods cannot.
- Comment
10 pages, 5 figures, 1 table. Accepted at L4DC 2024
- Published
- 2024
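The abstract describes two components: a frozen offline policy whose actions are corrected by a residual network, and a context encoder that summarizes recent transitions into a representation of the current dynamics. The following minimal sketch illustrates that structure only; all class names, dimensions, and the linear parameterizations are hypothetical stand-ins, not the paper's actual architecture. The residual head is zero-initialized so that behavior starts identical to the offline policy before fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, act_dim, ctx_dim = 4, 2, 3  # toy sizes, chosen for illustration

def offline_policy(s):
    # Stand-in for a policy trained on the fixed offline dataset (frozen).
    W = np.full((act_dim, obs_dim), 0.1)
    return W @ s

class ContextEncoder:
    # Encodes a window of recent (s, a, s') transitions into a context vector
    # summarizing the current dynamics (hypothetical linear encoder).
    def __init__(self):
        self.W = rng.normal(size=(ctx_dim, 2 * obs_dim + act_dim)) * 0.1

    def __call__(self, transitions):
        feats = np.stack([np.concatenate([s, a, s2]) for s, a, s2 in transitions])
        return np.tanh(self.W @ feats.mean(axis=0))

class ResidualPolicy:
    # Outputs a correction added to the offline action, conditioned on the
    # inferred context. Zero init => no correction before any fine-tuning.
    def __init__(self):
        self.W = np.zeros((act_dim, obs_dim + ctx_dim))

    def __call__(self, s, z):
        return self.W @ np.concatenate([s, z])

def act(s, transitions, encoder, residual):
    # Final action = offline action + context-conditioned residual correction.
    z = encoder(transitions)
    return offline_policy(s) + residual(s, z)
```

During online fine-tuning, only the residual network (and encoder) would be updated, so the agent can deviate from the offline solution exactly where the new dynamics demand it.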