
Q-Transformer: Scalable Offline Reinforcement Learning via Autoregressive Q-Functions

Authors :
Chebotar, Yevgen
Vuong, Quan
Irpan, Alex
Hausman, Karol
Xia, Fei
Lu, Yao
Kumar, Aviral
Yu, Tianhe
Herzog, Alexander
Pertsch, Karl
Gopalakrishnan, Keerthana
Ibarz, Julian
Nachum, Ofir
Sontakke, Sumedh
Salazar, Grecia
Tran, Huong T
Peralta, Jodilyn
Tan, Clayton
Manjunath, Deeksha
Singh, Jaspiar
Zitkovich, Brianna
Jackson, Tomas
Rao, Kanishka
Finn, Chelsea
Levine, Sergey
Publication Year :
2023

Abstract

In this work, we present a scalable reinforcement learning method for training multi-task policies from large offline datasets that can leverage both human demonstrations and autonomously collected data. Our method uses a Transformer to provide a scalable representation for Q-functions trained via offline temporal difference backups. We therefore refer to the method as Q-Transformer. By discretizing each action dimension and representing the Q-value of each action dimension as separate tokens, we can apply effective high-capacity sequence modeling techniques for Q-learning. We present several design decisions that enable good performance with offline RL training, and show that Q-Transformer outperforms prior offline RL algorithms and imitation learning techniques on a large diverse real-world robotic manipulation task suite. The project's website and videos can be found at https://qtransformer.github.io
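
A minimal illustrative sketch of the idea described in the abstract (not the authors' implementation): each continuous action dimension is discretized into bins, and a sequence model predicts one Q-value per bin for one action dimension at a time, conditioned on the state and the previously decoded dimensions. All names and sizes below (ACTION_DIMS, NUM_BINS, STATE_DIM, AutoregressiveQHead) are assumptions made for illustration, written in PyTorch.

# Hypothetical sketch of per-dimension, autoregressive Q-value prediction
# over discretized action tokens; not the released Q-Transformer code.
import torch
import torch.nn as nn

ACTION_DIMS = 8    # assumed number of action dimensions (e.g. arm + gripper)
NUM_BINS = 256     # assumed per-dimension discretization resolution
STATE_DIM = 512    # assumed size of the encoded observation embedding

class AutoregressiveQHead(nn.Module):
    """Predicts Q-values for one action dimension at a time, conditioning on
    the state embedding and the previously chosen (discretized) dimensions."""
    def __init__(self):
        super().__init__()
        self.token_embed = nn.Embedding(NUM_BINS, STATE_DIM)
        layer = nn.TransformerEncoderLayer(d_model=STATE_DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.q_head = nn.Linear(STATE_DIM, NUM_BINS)  # one Q-value per bin

    def forward(self, state_emb, prev_action_tokens):
        # state_emb: (B, STATE_DIM); prev_action_tokens: (B, t) with t < ACTION_DIMS
        tokens = self.token_embed(prev_action_tokens)             # (B, t, D)
        seq = torch.cat([state_emb.unsqueeze(1), tokens], dim=1)  # (B, t+1, D)
        h = self.encoder(seq)
        return self.q_head(h[:, -1])                              # (B, NUM_BINS)

def greedy_action(model, state_emb):
    """Decode an action by maximizing Q over bins, one dimension at a time."""
    chosen = torch.zeros(state_emb.shape[0], 0, dtype=torch.long, device=state_emb.device)
    for _ in range(ACTION_DIMS):
        q = model(state_emb, chosen)                                    # (B, NUM_BINS)
        chosen = torch.cat([chosen, q.argmax(-1, keepdim=True)], dim=1)
    return chosen  # indices of the chosen bin for each action dimension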

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.10150
Document Type :
Working Paper