A distributed adaptive policy gradient method based on momentum for multi-agent reinforcement learning
- Source :
- Complex & Intelligent Systems, Vol 10, Iss 5, Pp 7297-7310 (2024)
- Publication Year :
- 2024
- Publisher :
- Springer, 2024.
Abstract
- The Policy Gradient (PG) method is one of the most popular algorithms in Reinforcement Learning (RL). However, distributed adaptive variants of PG have rarely been studied in multi-agent settings. For this reason, this paper proposes a distributed adaptive policy gradient algorithm (IS-DAPGM) that incorporates Adam-type updates and an importance sampling technique. Furthermore, we establish a theoretical convergence rate of $$\mathcal {O}(1/\sqrt{T})$$, where T denotes the number of iterations; this matches the convergence rate of state-of-the-art centralized policy gradient methods. In addition, extensive experiments are conducted in a multi-agent environment, a modified version of the Particle World environment. By comparing against other distributed PG methods and varying the number of agents, we verify that IS-DAPGM is more efficient than existing methods.
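- Since the paper's full algorithm is only available via the DOI below, the following is a minimal sketch of the ingredients the abstract names: an Adam-type (momentum) policy gradient update, an importance sampling correction, and a decentralized consensus step over a network of agents. Every name here (is_weight, adam_pg_step, the mixing matrix W) and the update order are assumptions for illustration, not the paper's exact IS-DAPGM procedure.

```python
# Illustrative sketch only: a distributed Adam-type policy-gradient loop
# with importance-sampling ratios and consensus averaging. This is NOT the
# paper's exact IS-DAPGM algorithm; names and update order are assumed.
import numpy as np

def is_weight(logp_new, logp_old):
    """Importance-sampling ratio pi_theta(a|s) / pi_old(a|s),
    computed from log-probabilities for numerical stability."""
    return np.exp(logp_new - logp_old)

def adam_pg_step(theta, m, v, grad_est, t,
                 lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam-type update for gradient *ascent* (maximize return)."""
    m = b1 * m + (1 - b1) * grad_est           # first moment (momentum)
    v = b2 * v + (1 - b2) * grad_est ** 2      # second moment
    m_hat = m / (1 - b1 ** t)                  # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta + lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

def consensus(thetas, W):
    """Decentralized averaging: each agent mixes its parameters with its
    neighbors' via a doubly-stochastic mixing matrix W."""
    return W @ thetas

# Toy usage: 3 agents, 4-dimensional policy parameters, fixed mixing matrix.
rng = np.random.default_rng(0)
n_agents, dim = 3, 4
thetas = rng.normal(size=(n_agents, dim))
ms = np.zeros_like(thetas)
vs = np.zeros_like(thetas)
W = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

for t in range(1, 101):
    for i in range(n_agents):
        # Stand-ins for sampled log-probabilities; in practice these come
        # from the behavior and current policies on the sampled trajectory.
        logp_old = rng.normal()
        logp_new = logp_old + rng.normal(scale=0.1)
        rho = is_weight(logp_new, logp_old)
        # Stand-in for rho * grad log pi(a|s) * return on sampled data.
        grad = rho * rng.normal(size=dim)
        thetas[i], ms[i], vs[i] = adam_pg_step(thetas[i], ms[i], vs[i], grad, t)
    thetas = consensus(thetas, W)  # mix parameters over the network
```

- The importance-sampling ratio lets each agent reuse trajectories generated by a stale policy, while the consensus step is the standard mechanism by which decentralized methods spread local progress across the network without a central server.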
Details
- Language :
- English
- ISSN :
- 2199-4536 and 2198-6053
- Volume :
- 10
- Issue :
- 5
- Database :
- Directory of Open Access Journals
- Journal :
- Complex & Intelligent Systems
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.6396ec8713a04f02822c1906e004b340
- Document Type :
- article
- Full Text :
- https://doi.org/10.1007/s40747-024-01529-6