Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions
- Publication Year :
- 2023
Abstract
- In this work, we present a scalable reinforcement learning method for training multi-task policies from large offline datasets that can leverage both human demonstrations and autonomously collected data. Our method uses a Transformer to provide a scalable representation for Q-functions trained via offline temporal difference backups. We therefore refer to the method as Q-Transformer. By discretizing each action dimension and representing the Q-value of each action dimension as separate tokens, we can apply effective high-capacity sequence modeling techniques for Q-learning. We present several design decisions that enable good performance with offline RL training, and show that Q-Transformer outperforms prior offline RL algorithms and imitation learning techniques on a large diverse real-world robotic manipulation task suite. The project's website and videos can be found at https://qtransformer.github.io
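To make the tokenization idea concrete, below is a minimal sketch of per-dimension action discretization with autoregressive Q-value prediction, as the abstract describes. All module names, layer sizes, and the greedy decoding loop are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Sketch: each action dimension is discretized into bins, and a Transformer
# predicts Q-values over bins for dimension i conditioned on the state and
# the bins already chosen for dimensions 0..i-1. Hyperparameters are
# assumptions, not values from the paper.
import torch
import torch.nn as nn


class AutoregressiveQHead(nn.Module):
    def __init__(self, state_dim: int, action_dims: int = 7, bins: int = 256,
                 d_model: int = 128, nhead: int = 4, layers: int = 2):
        super().__init__()
        self.action_dims, self.bins = action_dims, bins
        self.state_proj = nn.Linear(state_dim, d_model)   # state -> first token
        self.bin_embed = nn.Embedding(bins, d_model)      # one token per chosen bin
        self.pos_embed = nn.Embedding(action_dims + 1, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.q_head = nn.Linear(d_model, bins)            # Q-values over bins

    def forward(self, state, action_tokens):
        # state: (B, state_dim); action_tokens: (B, T) with T < action_dims.
        # Returns Q-values (B, T+1, bins): position i scores the bins for
        # action dimension i given the state and earlier dimensions.
        tok = [self.state_proj(state).unsqueeze(1)]
        if action_tokens.shape[1] > 0:
            tok.append(self.bin_embed(action_tokens))
        x = torch.cat(tok, dim=1)
        x = x + self.pos_embed(torch.arange(x.shape[1], device=x.device))
        causal = torch.triu(
            torch.full((x.shape[1], x.shape[1]), float("-inf"), device=x.device),
            diagonal=1)
        return self.q_head(self.encoder(x, mask=causal))


@torch.no_grad()
def greedy_action(model, state):
    # Decode one action dimension at a time, feeding each argmax bin back in.
    tokens = torch.zeros(state.shape[0], 0, dtype=torch.long)
    for _ in range(model.action_dims):
        q = model(state, tokens)          # (B, T+1, bins)
        nxt = q[:, -1].argmax(dim=-1)     # best bin for the next dimension
        tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)
    return tokens                          # (B, action_dims) bin indices
```

Training would pair this head with offline TD backups over the per-dimension tokens; the abstract does not spell out that update, so the details are left out here.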
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2309.10150
- Document Type :
- Working Paper