
Discretizing Continuous Action Space with Unimodal Probability Distributions for On-Policy Reinforcement Learning

Authors:
Zhu, Yuanyang
Wang, Zhi
Zhu, Yuanheng
Chen, Chunlin
Zhao, Dongbin
Publication Year:
2024

Abstract

For on-policy reinforcement learning, discretizing the action space for continuous control can easily express multiple modes and is straightforward to optimize. However, when the inherent ordering among the discrete atomic actions is ignored, the explosion in the number of discrete actions can produce undesired properties and induce higher variance in the policy gradient estimator. In this paper, we introduce a straightforward architecture that addresses this issue by constraining the discrete policy to be unimodal using Poisson probability distributions. This unimodal architecture can better leverage the continuity of the underlying continuous action space through explicit unimodal probability distributions. We conduct extensive experiments showing that a discrete policy with a unimodal probability distribution provides significantly faster convergence and higher performance for on-policy reinforcement learning algorithms in challenging control tasks, especially in highly complex tasks such as Humanoid. We also provide a theoretical analysis of the variance of the policy gradient estimator, which suggests that our carefully designed unimodal discrete policy retains lower variance and yields a stable learning process.

Comment: IEEE Transactions on Neural Networks and Learning Systems
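The abstract names the key mechanism (a Poisson-shaped unimodal distribution over ordered discrete action atoms) but gives no implementation details. The following is a minimal sketch of that idea under our own assumptions: NumPy/SciPy, a single action dimension with uniformly spaced atoms, and hypothetical helper names (unimodal_action_probs, bin_to_continuous). The paper's actual parameterization and normalization may differ.

    import numpy as np
    from scipy.stats import poisson

    def unimodal_action_probs(lam: float, num_atoms: int) -> np.ndarray:
        # Truncate the Poisson PMF with rate `lam` to atom indices
        # 0..num_atoms-1 and renormalize. The Poisson PMF is unimodal,
        # so the discrete policy concentrates probability mass around
        # one atom and decays smoothly toward its neighbors.
        ks = np.arange(num_atoms)
        pmf = poisson.pmf(ks, mu=lam)
        return pmf / pmf.sum()

    def bin_to_continuous(idx: int, num_atoms: int,
                          low: float, high: float) -> float:
        # Decode a discrete atom index back to a continuous action via
        # uniform discretization of the interval [low, high].
        return low + (high - low) * idx / (num_atoms - 1)

    # Example: suppose a policy head outputs the rate lam = 4.2 for one
    # action dimension discretized into 11 atoms over [-1, 1].
    probs = unimodal_action_probs(4.2, 11)
    idx = np.random.choice(11, p=probs)             # sample a discrete atom
    action = bin_to_continuous(idx, 11, -1.0, 1.0)  # map it to a continuous action

Because the Poisson PMF has a single mode (at floor(lam)), probability decays monotonically over neighboring atoms on either side, which is the continuity property the abstract attributes to the unimodal discrete policy.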

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.00309
Document Type:
Working Paper