State Distribution-Aware Sampling for Deep Q-Learning.
- Source :
- Neural Processing Letters; Oct 2019, Vol. 50 Issue 2, p1649-1660, 12p
- Publication Year :
- 2019
Abstract
- A critical and challenging problem in reinforcement learning is how to learn the state-action value function from the experience replay buffer while maintaining sample efficiency and fast convergence to a high-quality solution. In prior work, transitions are either sampled uniformly at random from the replay buffer or sampled according to their priority as measured by the temporal-difference (TD) error. However, these approaches do not fully take into account the intrinsic characteristics of the transition distribution in the state space and can result in redundant, unnecessary TD updates, slowing down the convergence of the learning procedure. To overcome this problem, we propose a novel state distribution-aware sampling method that balances the replay times of transitions with an imbalanced distribution, taking into account both the occurrence frequencies of transitions and the uncertainty of the state-action values. Consequently, our approach reduces unnecessary TD updates and increases TD updates for state-action values with higher uncertainty, making experience replay more effective and efficient. Extensive experiments are conducted on both classic control tasks and Atari 2600 games on the OpenAI Gym platform, and the results demonstrate the effectiveness of our approach in comparison with the standard DQN approach. [ABSTRACT FROM AUTHOR]
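The abstract only sketches the general idea of weighting replay by transition occurrence frequency and state-action value uncertainty; the paper's exact sampling rule is not reproduced in this record. The following is a minimal, illustrative Python sketch of that idea, not the authors' algorithm: the buffer class, the state discretization via `bins`, and the use of the last absolute TD error as the uncertainty proxy are all assumptions made here for illustration.

```python
import random
from collections import deque

import numpy as np


class StateAwareReplayBuffer:
    """Illustrative replay buffer: sampling probability is weighted by
    (a) the inverse visit count of a transition's discretized state and
    (b) a per-transition uncertainty proxy (the last absolute TD error).
    Intended for low-dimensional state spaces such as classic control tasks.
    """

    def __init__(self, capacity=100_000, bins=10, eps=1e-3):
        self.buffer = deque(maxlen=capacity)
        self.bins = bins          # discretization resolution per state dimension
        self.eps = eps            # keeps sampling weights strictly positive
        self.state_counts = {}    # discretized state -> occurrence count

    def _key(self, state):
        # Coarse discretization so continuous states can be counted.
        return tuple(np.floor(np.asarray(state) * self.bins).astype(int))

    def add(self, state, action, reward, next_state, done, td_error=1.0):
        key = self._key(state)
        self.state_counts[key] = self.state_counts.get(key, 0) + 1
        self.buffer.append(
            (state, action, reward, next_state, done, key, abs(td_error))
        )

    def sample(self, batch_size):
        # Weight = uncertainty / state frequency: transitions from rarely
        # visited states or with uncertain value estimates are replayed more
        # often than frequent, well-settled ones.
        weights = np.array([
            (td + self.eps) / self.state_counts[key]
            for (*_, key, td) in self.buffer
        ])
        probs = weights / weights.sum()
        idx = np.random.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

In a DQN training loop, such a buffer would be used in place of the usual uniform-sampling replay buffer, with `td_error` refreshed when a transition is (re)evaluated; uniform sampling is recovered by making the weights constant.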
- Subjects :
- REINFORCEMENT learning
- SAMPLING methods
- STATISTICAL sampling
Details
- Language :
- English
- ISSN :
- 1370-4621
- Volume :
- 50
- Issue :
- 2
- Database :
- Complementary Index
- Journal :
- Neural Processing Letters
- Publication Type :
- Academic Journal
- Accession number :
- 139232955
- Full Text :
- https://doi.org/10.1007/s11063-018-9944-z