
The $f$-Divergence Reinforcement Learning Framework

Authors :
Gong, Chen
He, Qiang
Bai, Yunpeng
Yang, Zhou
Chen, Xiaoyu
Hou, Xinwen
Zhang, Xianjie
Liu, Yu
Fan, Guoliang
Publication Year :
2021

Abstract

The framework of deep reinforcement learning (DRL) provides a powerful and widely applicable mathematical formalization for sequential decision-making. This paper presents a novel DRL framework, termed \emph{$f$-Divergence Reinforcement Learning (FRL)}. In FRL, the policy evaluation and policy improvement phases are performed simultaneously by minimizing the $f$-divergence between the learning policy and the sampling policy, in contrast to conventional DRL algorithms, which aim to maximize the expected cumulative rewards. We theoretically prove that minimizing this $f$-divergence makes the learning policy converge to the optimal policy. In addition, we convert the process of training agents in the FRL framework into a saddle-point optimization problem with a specific $f$ function via the Fenchel conjugate, which yields new methods for policy evaluation and policy improvement. Through mathematical proofs and empirical evaluation, we demonstrate that the FRL framework has two advantages: (1) the policy evaluation and policy improvement processes are performed simultaneously, and (2) the issue of overestimating the value function is naturally alleviated. To evaluate the effectiveness of the FRL framework, we conduct experiments on Atari 2600 video games and show that agents trained in the FRL framework match or surpass the baseline DRL algorithms.

Comment: 17 pages, 4 figures
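
The abstract does not spell out the conversion, but the standard Fenchel-dual (variational) representation of an $f$-divergence is the usual route to such a saddle-point objective. A minimal sketch, with $\pi$ denoting the learning policy, $\mu$ the sampling policy, and $g$ an auxiliary dual function (assumed notation rather than the paper's own), is:
\[
  f^{*}(t) = \sup_{u}\bigl(ut - f(u)\bigr), \qquad
  D_f(\pi \,\|\, \mu) = \sup_{g}\; \mathbb{E}_{x \sim \pi}\bigl[g(x)\bigr] - \mathbb{E}_{x \sim \mu}\bigl[f^{*}(g(x))\bigr],
\]
so that minimizing the divergence over the learning policy becomes the saddle-point problem
\[
  \min_{\pi}\, \sup_{g}\; \mathbb{E}_{\pi}[g] - \mathbb{E}_{\mu}\bigl[f^{*}(g)\bigr].
\]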

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2109.11867
Document Type :
Working Paper