
PPO-UE: Proximal Policy Optimization via Uncertainty-Aware Exploration

Authors:
Zhang, Qisheng
Guo, Zhen
Jøsang, Audun
Kaplan, Lance M.
Chen, Feng
Jeong, Dong H.
Cho, Jin-Hee
Publication Year:
2022

Abstract

Proximal Policy Optimization (PPO) is a highly popular policy-based deep reinforcement learning (DRL) approach. However, we observe that the homogeneous exploration process in PPO can cause an unexpected stability issue during training. To address this issue, we propose PPO-UE, a PPO variant equipped with self-adaptive uncertainty-aware explorations (UEs) based on a ratio uncertainty level. PPO-UE is designed to improve convergence speed and performance with an optimized ratio uncertainty level. Extensive sensitivity analysis varying the ratio uncertainty level shows that PPO-UE considerably outperforms the baseline PPO in Roboschool continuous control tasks.
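For context, the probability ratio that PPO clips, and on which the abstract's "ratio uncertainty level" is defined, can be sketched as below. The uncertainty measure used here (the batch standard deviation of the ratios) and the threshold-triggered entropy boost are illustrative assumptions only; the abstract does not specify PPO-UE's exact mechanism, so this is a minimal sketch of the general idea, not the authors' method.

```python
# Minimal sketch: PPO clipped surrogate plus a *hypothetical* ratio-uncertainty check.
# The uncertainty measure (std of probability ratios) and the entropy boost below are
# illustrative assumptions, not the mechanism described in the paper.
import torch


def ppo_ue_loss(new_log_probs, old_log_probs, advantages, entropy=None,
                clip_eps=0.2, uncertainty_threshold=0.1,
                base_entropy_coef=0.01, boosted_entropy_coef=0.05):
    """Standard PPO-Clip surrogate with an assumed uncertainty-aware entropy bonus."""
    ratios = torch.exp(new_log_probs - old_log_probs)            # pi_new / pi_old
    unclipped = ratios * advantages
    clipped = torch.clamp(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(unclipped, clipped).mean()          # standard PPO-Clip objective

    # Illustrative "ratio uncertainty level": spread of the ratios over the batch.
    ratio_uncertainty = ratios.std()

    # Hypothetical self-adaptive exploration: increase the entropy bonus only when
    # the ratio uncertainty exceeds a chosen threshold.
    if entropy is not None:
        coef = (boosted_entropy_coef
                if ratio_uncertainty > uncertainty_threshold
                else base_entropy_coef)
        policy_loss = policy_loss - coef * entropy.mean()

    return policy_loss, ratio_uncertainty


if __name__ == "__main__":
    # Toy batch to show the call signature; values are random placeholders.
    torch.manual_seed(0)
    old_lp = torch.randn(64)
    new_lp = old_lp + 0.05 * torch.randn(64)
    adv = torch.randn(64)
    ent = torch.rand(64)
    loss, unc = ppo_ue_loss(new_lp, old_lp, adv, entropy=ent)
    print(f"loss={loss.item():.4f}, ratio uncertainty={unc.item():.4f}")
```

In this sketch the `uncertainty_threshold` plays the role of the "ratio uncertainty level" that the abstract says is tuned via sensitivity analysis; the actual PPO-UE exploration rule should be taken from the paper itself.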

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2212.06343
Document Type:
Working Paper