
Regularize! Don't Mix: Multi-Agent Reinforcement Learning without Explicit Centralized Structures

Authors:
Siu, Chapman
Traish, Jason
Da Xu, Richard Yi
Publication Year:
2021

Abstract

We propose using regularization for Multi-Agent Reinforcement Learning (MARL) rather than learning explicit cooperative structures, an approach we call Multi-Agent Regularized Q-learning (MARQ). Many MARL approaches leverage centralized structures in order to exploit global state information or to remove communication constraints when the agents act in a decentralized manner. Instead of learning redundant structures which are discarded during agent execution, we propose to leverage the shared experiences of the agents to regularize the individual policies and promote structured exploration. We examine several approaches by which MARQ can explicitly or implicitly regularize our policies in a multi-agent setting. MARQ addresses the limitations of centralized structures in the MARL context by applying regularization constraints which can correct bias in off-policy, out-of-distribution agent experiences and promote diverse exploration. Our algorithm is evaluated on several benchmark multi-agent environments, and we show that MARQ consistently outperforms several baselines and state-of-the-art algorithms, learning in fewer steps and converging to higher returns.
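As a rough illustration of the idea in the abstract (not the authors' actual formulation or code), the sketch below regularizes each agent's independent Q-learning loss with a penalty computed on a batch of experiences pooled from all agents. The network shapes, the conservative-style penalty, and the names `marq_style_loss` and `shared_batch` are all assumptions for illustration only.

```python
# Hypothetical sketch: independent Q-learning per agent, plus a regularizer
# computed from experiences shared across agents. Shapes, the penalty form,
# and the data-handling helpers are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs):
        return self.net(obs)

def marq_style_loss(q_net, target_net, batch, shared_batch, gamma=0.99, alpha=1.0):
    """One agent's loss: a standard TD error plus a conservative-style penalty
    evaluated on experiences pooled from all agents (illustrative assumption)."""
    obs, act, rew, next_obs, done = batch
    q = q_net(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    td_loss = F.mse_loss(q, target)

    # Regularizer on shared (possibly out-of-distribution) experiences:
    # push down the log-sum-exp of this agent's Q-values while pushing up the
    # Q-value of the action another agent actually took, discouraging
    # overestimation on unfamiliar states while biasing exploration toward
    # behaviour observed elsewhere in the team.
    s_obs, s_act = shared_batch
    shared_q = q_net(s_obs)
    penalty = (torch.logsumexp(shared_q, dim=1)
               - shared_q.gather(1, s_act.unsqueeze(1)).squeeze(1)).mean()
    return td_loss + alpha * penalty
```

The key design point this sketch tries to convey is that no centralized mixing network or communication channel is learned; cross-agent information enters only through the shared experience batch used in the regularization term, which can simply be dropped at execution time.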

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2109.09038
Document Type:
Working Paper