
Reinforcement Learning Gradients as Vitamin for Online Finetuning Decision Transformers

Authors :
Yan, Kai
Schwing, Alexander G.
Wang, Yu-Xiong
Publication Year :
2024

Abstract

Decision Transformers have recently emerged as a compelling new paradigm for offline Reinforcement Learning (RL), completing a trajectory in an autoregressive way. While improvements have been made to overcome initial shortcomings, online finetuning of decision transformers has been surprisingly under-explored. The widely adopted state-of-the-art Online Decision Transformer (ODT) still struggles when pretrained with low-reward offline data. In this paper, we theoretically analyze the online finetuning of the decision transformer, showing that a commonly used Return-To-Go (RTG) that is far from the expected return hampers the online finetuning process. This problem, however, is well addressed by the value function and advantage of standard RL algorithms. As suggested by our analysis, we find in our experiments that simply adding TD3 gradients to the finetuning process of ODT effectively improves its online finetuning performance, especially when ODT is pretrained with low-reward offline data. These findings provide new directions to further improve decision transformers.

Comment: Accepted as NeurIPS 2024 spotlight. 33 pages, 26 figures
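The core idea of adding RL gradients to ODT finetuning can be illustrated with a toy sketch. This is not the paper's implementation: a linear map stands in for the Decision Transformer policy and a fixed linear function stands in for TD3's learned critic; only the loss composition (supervised action loss plus a weighted TD3-style actor term, minimizing -Q) reflects the abstract's description. All names and values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not the paper's actual architecture):
# a linear "policy" in place of the transformer, a fixed linear critic
# in place of TD3's learned Q-function.
state_dim, act_dim = 4, 2
W = rng.normal(size=(act_dim, state_dim)) * 0.1  # policy weights
w_a = rng.normal(size=act_dim)                   # critic's action weights

states = rng.normal(size=(32, state_dim))        # sampled states
actions_data = rng.normal(size=(32, act_dim))    # replay-buffer actions
lam, lr = 0.5, 0.05                              # TD3-term weight, step size

def combined_loss(W):
    pred = states @ W.T
    # ODT-style supervised action loss (behavior cloning on the buffer)
    bc = np.mean(np.sum((pred - actions_data) ** 2, axis=1))
    # TD3-style actor term: maximize Q, i.e. add -Q to the loss
    q = pred @ w_a
    return bc - lam * np.mean(q)

loss_before = combined_loss(W)
for _ in range(50):
    pred = states @ W.T
    # analytic gradient of the combined loss w.r.t. W
    grad_bc = 2 * (pred - actions_data).T @ states / len(states)
    grad_td3 = -lam * np.outer(w_a, states.mean(axis=0))
    W -= lr * (grad_bc + grad_td3)
loss_after = combined_loss(W)
```

The supervised term keeps the policy close to observed actions while the -Q term pushes it toward higher critic values; gradient descent on their weighted sum decreases the combined objective.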

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.24108
Document Type :
Working Paper