
Bilevel reinforcement learning via the development of hyper-gradient without lower-level convexity

Authors :
Yang, Yan
Gao, Bin
Yuan, Ya-xiang
Publication Year :
2024

Abstract

Bilevel reinforcement learning (RL), which features intertwined two-level problems, has attracted growing interest recently. The inherent non-convexity of the lower-level RL problem is, however, an impediment to developing bilevel optimization methods. By employing the fixed point equation associated with the regularized RL, we characterize the hyper-gradient via fully first-order information, thus circumventing the assumption of lower-level convexity. This, remarkably, distinguishes our development of hyper-gradient from the general AID-based bilevel frameworks, since we take advantage of the specific structure of RL problems. Moreover, we propose both model-based and model-free bilevel reinforcement learning algorithms, facilitated by access to the fully first-order hyper-gradient. Both algorithms provably enjoy a convergence rate of $\mathcal{O}(\epsilon^{-1})$. To the best of our knowledge, this is the first time that AID-based bilevel RL dispenses with additional assumptions on the lower-level problem. In addition, numerical experiments demonstrate that the hyper-gradient indeed serves as an integration of exploitation and exploration.

Comment: 43 pages, 1 figure, 1 table

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.19697
Document Type :
Working Paper