On Centralized Critics in Multi-Agent Reinforcement Learning
- Source :
- Journal of Artificial Intelligence Research 77 (2023): 295-354
- Publication Year :
- 2024
Abstract
- Centralized Training for Decentralized Execution, where agents are trained offline in a centralized fashion and execute online in a decentralized manner, has become a popular approach in Multi-Agent Reinforcement Learning (MARL). In particular, it has become popular to develop actor-critic methods that train decentralized actors with a centralized critic, where the centralized critic is allowed access to global information about the entire system, including the true system state. Such centralized critics are possible given offline information and are not used for online execution. While these methods perform well in a number of domains and have become a de facto standard in MARL, using a centralized critic in this context has yet to be sufficiently analyzed theoretically or empirically. In this paper, we therefore formally analyze centralized and decentralized critic approaches and the effect of using state-based critics in partially observable environments. Our theoretical results run contrary to common intuition: critic centralization is not strictly beneficial, and using state values can be harmful. In particular, we prove that state-based critics can introduce unexpected bias and variance compared to history-based critics. Finally, we demonstrate how the theory applies in practice by comparing different forms of critics on a wide range of common multi-agent benchmarks. The experiments reveal practical issues, such as the difficulty of representation learning under partial observability, which highlight why the theoretical problems are often overlooked in the literature.
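- To make the distinction in the abstract concrete, below is a minimal, illustrative sketch (not the paper's implementation) contrasting the two critic inputs it discusses: a state-based centralized critic that scores the true global state, versus a history-based critic that conditions on the joint observation history. All module names, dimensions, and the recurrent encoder choice are assumptions made for illustration only.

```python
# Illustrative sketch only: state-based vs. history-based critics.
# Dimensions and architecture choices below are assumptions, not the paper's.
import torch
import torch.nn as nn

STATE_DIM, OBS_DIM, N_AGENTS, HIDDEN = 8, 6, 2, 32


class StateCritic(nn.Module):
    """Centralized critic that scores the true (global) system state, V(s)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 1)
        )

    def forward(self, state):            # state: (batch, STATE_DIM)
        return self.net(state)           # -> (batch, 1)


class HistoryCritic(nn.Module):
    """Critic that scores the joint observation history, V(h), via a GRU encoder."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_AGENTS * OBS_DIM, HIDDEN, batch_first=True)
        self.value_head = nn.Linear(HIDDEN, 1)

    def forward(self, joint_obs_history):  # (batch, time, N_AGENTS * OBS_DIM)
        _, last_hidden = self.encoder(joint_obs_history)
        return self.value_head(last_hidden.squeeze(0))  # -> (batch, 1)


if __name__ == "__main__":
    state = torch.randn(4, STATE_DIM)                  # batch of global states
    history = torch.randn(4, 5, N_AGENTS * OBS_DIM)    # batch of 5-step joint histories
    print(StateCritic()(state).shape)      # torch.Size([4, 1])
    print(HistoryCritic()(history).shape)  # torch.Size([4, 1])
```

- In the CTDE setting the abstract describes, either critic would only be used during offline training to provide a learning signal for decentralized actors; the paper's analysis concerns when conditioning on the state (as in StateCritic) introduces bias and variance relative to conditioning on histories (as in HistoryCritic).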
- Subjects :
- Computer Science - Artificial Intelligence
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2408.14597
- Document Type :
- Working Paper