
Sample Dropout: A Simple yet Effective Variance Reduction Technique in Deep Policy Optimization

Authors:
Lin, Zichuan
Wu, Xiapeng
Sun, Mingfei
Ye, Deheng
Fu, Qiang
Yang, Wei
Liu, Wei
Publication Year: 2023

Abstract

Recent success in Deep Reinforcement Learning (DRL) methods has shown that policy optimization with respect to an off-policy distribution via importance sampling is effective for sample reuse. In this paper, we show that the use of importance sampling can introduce high variance in the objective estimate. Specifically, we show in a principled way that the variance of the importance sampling estimate grows quadratically with the importance ratios, and that large ratios can consequently jeopardize the effectiveness of surrogate objective optimization. We then propose a technique called sample dropout, which bounds the estimation variance by dropping out samples whose ratio deviation is too high. We instantiate this sample dropout technique on representative policy optimization algorithms, including TRPO, PPO, and ESPO, and demonstrate that it consistently boosts their performance on both continuous and discrete control benchmarks, including MuJoCo, DMControl, and Atari video games. Our code is open-sourced at https://github.com/LinZichuan/sdpo.git.
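
Below is a minimal PyTorch sketch of the sample-dropout idea described in the abstract, applied to a PPO-style surrogate: samples whose importance ratio deviates from 1 by more than a threshold are masked out before the objective is averaged. The function name, the `drop_delta` threshold, and the masking details are illustrative assumptions rather than the authors' actual implementation; see the linked repository for that.

```python
# Hypothetical sketch of sample dropout for a PPO-style surrogate objective.
# Assumption: samples are dropped when |ratio - 1| exceeds `drop_delta`;
# the exact criterion used by the paper may differ.
import torch

def sample_dropout_surrogate(logp_new, logp_old, advantages, drop_delta=0.3):
    """Importance-weighted surrogate with high-deviation samples dropped."""
    ratios = torch.exp(logp_new - logp_old)       # importance ratios
    keep = (ratios - 1.0).abs() <= drop_delta     # mask out large deviations
    if keep.sum() == 0:                           # guard against an empty batch
        return torch.zeros((), requires_grad=True)
    # Average the importance-weighted advantages over the kept samples only.
    return (ratios * advantages)[keep].mean()

# Usage: maximize the surrogate (minimize its negative) w.r.t. the new policy.
logp_new = torch.randn(64, requires_grad=True)
logp_old = logp_new.detach() + 0.1 * torch.randn(64)
advantages = torch.randn(64)
loss = -sample_dropout_surrogate(logp_new, logp_old, advantages)
loss.backward()
```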

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2302.02299
Document Type: Working Paper