Prioritized experience replay based on dynamics priority
- Source :
- Scientific Reports, Vol 14, Iss 1, Pp 1-9 (2024)
- Publication Year :
- 2024
- Publisher :
- Nature Portfolio, 2024.
Abstract
- Experience replay has been instrumental in achieving significant advances in reinforcement learning by increasing the utilization of collected data. To further improve sampling efficiency, prioritized experience replay (PER) was proposed: it prioritizes experiences by their temporal-difference (TD) error, so the agent learns from the more valuable experiences stored in the experience pool. While various prioritization schemes have since been proposed, they ignore how the value of an experience changes during training, merely combining different priority criteria in a fixed or linear manner. In this paper, we present a novel prioritized experience replay algorithm called PERDP, which employs a dynamic priority adjustment framework. PERDP adaptively adjusts the weight of each criterion based on the average priority level of the experience pool and evaluates an experience's value according to the current network. We apply this algorithm to the SAC model and conduct experiments in OpenAI Gym environments. The results demonstrate that PERDP converges faster than PER.
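The abstract's two core ideas, sampling transitions in proportion to a priority combined from several criteria and re-weighting those criteria from the pool's average priority level, can be illustrated with a minimal sketch. The class name, the choice of |TD error| and recency as the two criteria, and the weight-update rule below are illustrative assumptions, not the paper's actual PERDP formulation.

```python
import numpy as np


class DynamicPriorityReplayBuffer:
    """Toy prioritized replay buffer with a dynamically re-weighted priority.

    Priority mixes two criteria, |TD error| and recency (both illustrative),
    and the mixing weight is nudged using the buffer's average priority.
    """

    def __init__(self, capacity, alpha=0.6, eps=1e-6):
        self.capacity = capacity
        self.alpha = alpha          # exponent that sharpens or softens priorities
        self.eps = eps              # keeps every priority strictly positive
        self.transitions = []       # stored (s, a, r, s', done) tuples
        self.td_errors = []         # |TD error| recorded when the item was added/updated
        self.steps = []             # global step at which the item was stored
        self.t = 0                  # global step counter
        self.w_td = 0.5             # weight on the TD-error criterion (adapted below)

    def add(self, transition, td_error):
        if len(self.transitions) >= self.capacity:   # drop the oldest item when full
            self.transitions.pop(0)
            self.td_errors.pop(0)
            self.steps.pop(0)
        self.transitions.append(transition)
        self.td_errors.append(abs(float(td_error)))
        self.steps.append(self.t)
        self.t += 1

    def _priorities(self):
        td = np.asarray(self.td_errors)
        recency = 1.0 / (1.0 + self.t - np.asarray(self.steps))   # newer -> closer to 1
        p = self.w_td * td + (1.0 - self.w_td) * recency + self.eps
        return p ** self.alpha

    def _adapt_weight(self, priorities):
        # Heuristic stand-in for the adaptive re-weighting described in the abstract:
        # when the average priority drifts low, lean more on the TD-error criterion.
        mean_p = priorities.mean()
        self.w_td = float(np.clip(self.w_td + 0.05 * (0.5 - mean_p), 0.1, 0.9))

    def sample(self, batch_size, beta=0.4):
        p = self._priorities()
        probs = p / p.sum()
        idx = np.random.choice(len(self.transitions), batch_size, p=probs)
        # Importance-sampling weights correct the bias introduced by prioritization.
        is_weights = (len(self.transitions) * probs[idx]) ** (-beta)
        is_weights /= is_weights.max()
        self._adapt_weight(p)
        batch = [self.transitions[i] for i in idx]
        return batch, idx, is_weights

    def update_td_errors(self, idx, new_td_errors):
        # After a gradient step, refresh priorities with the current network's TD errors,
        # so an experience's value tracks the network as it changes during training.
        for i, e in zip(idx, new_td_errors):
            self.td_errors[i] = abs(float(e))
```

A typical training loop would add transitions with the TD error from the current critic, draw a batch with sample(), scale the per-sample losses by is_weights, and call update_td_errors after the gradient step so priorities reflect the current network. How PERDP actually defines its criteria and adjusts their weights is specified in the full paper.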
Details
- Language :
- English
- ISSN :
- 2045-2322 and 2030-6458
- Volume :
- 14
- Issue :
- 1
- Database :
- Directory of Open Access Journals
- Journal :
- Scientific Reports
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.78c7d2e2030645808b85626bb0faa420
- Document Type :
- article
- Full Text :
- https://doi.org/10.1038/s41598-024-56673-3