
ACL-QL: Adaptive Conservative Level in Q-Learning for Offline Reinforcement Learning

Authors :
Wu, Kun
Zhao, Yinuo
Xu, Zhiyuan
Che, Zhengping
Yin, Chengxiang
Liu, Chi Harold
Qiu, Qinru
Feng, Feifei
Tang, Jian
Publication Year :
2024

Abstract

Offline Reinforcement Learning (RL), which operates solely on static datasets without further interaction with the environment, offers an appealing way to learn a safe and promising control policy. Prevailing methods typically learn a conservative policy to mitigate Q-value overestimation, but they are prone to overdoing it, yielding an overly conservative policy. Moreover, they optimize all samples equally under fixed constraints, lacking fine-grained control over the conservative level. This limitation results in a performance decline. To address these two challenges in a unified way, we propose Adaptive Conservative Level in Q-Learning (ACL-QL), a framework that limits the Q-values to a mild range and enables adaptive control of the conservative level for each state-action pair, i.e., lifting the Q-values more for good transitions and less for bad ones. We theoretically analyze the conditions under which the conservative level of the learned Q-function stays within a mild range and how each transition can be optimized adaptively. Motivated by this analysis, we propose a novel algorithm, ACL-QL, which uses two learnable adaptive weight functions to control the conservative level of each transition. We then design a monotonicity loss and surrogate losses to train the adaptive weight functions, Q-function, and policy network alternately. We evaluate ACL-QL on the widely used D4RL benchmark and conduct extensive ablation studies, demonstrating its effectiveness and state-of-the-art performance against existing offline DRL baselines.

Comment: 19 pages, 4 figures, IEEE Transactions on Neural Networks and Learning Systems (2024)
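The abstract describes replacing a fixed conservative penalty with per-transition adaptive weights. The sketch below is a minimal illustration of that idea, not the paper's actual formulation: it shows a CQL-style critic objective where a single scalar penalty coefficient is replaced by hypothetical per-transition weights (`w_up`, `w_down`, produced in the paper by learned weight functions), so that Q-values of good dataset transitions are lifted more and out-of-distribution actions are pushed down per-sample. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def adaptive_conservative_critic_loss(q_data, q_ood, w_up, w_down, td_error):
    """Illustrative adaptively weighted conservative critic objective.

    q_data   : Q-values of dataset (in-distribution) actions, shape (B,)
    q_ood    : Q-values of sampled out-of-distribution actions, shape (B,)
    w_up     : adaptive weights lifting Q on dataset actions, shape (B,)
    w_down   : adaptive weights suppressing Q on OOD actions, shape (B,)
    td_error : standard Bellman TD errors, shape (B,)
    """
    # A fixed-weight conservative method (e.g., CQL) would use one scalar
    # alpha for every sample; here each transition carries its own weights,
    # so the conservative level varies per state-action pair.
    conservative_term = np.mean(w_down * q_ood - w_up * q_data)
    # Standard Bellman regression term, as in ordinary Q-learning.
    bellman_term = np.mean(td_error ** 2)
    return conservative_term + bellman_term

# Example: higher w_up on the second (better) transition lifts its Q-value more.
q_data = np.array([1.0, 2.0])
q_ood = np.array([2.0, 3.0])
w_up = np.array([1.0, 1.0])
w_down = np.array([0.5, 0.5])
td_error = np.zeros(2)
loss = adaptive_conservative_critic_loss(q_data, q_ood, w_up, w_down, td_error)
# loss = mean([0.5*2 - 1.0, 0.5*3 - 2.0]) = -0.25
```

In the paper the weights are themselves learnable networks trained with a monotonicity loss (so better transitions receive larger lifting weights) alternately with the Q-function and policy; this sketch only fixes them to constants to show where they enter the objective.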

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.16848
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TNNLS.2024.3497667