
Off-Beat Multi-Agent Reinforcement Learning

Authors :
Qiu, Wei
Wang, Weixun
Wang, Rundong
An, Bo
Hu, Yujing
Obraztsova, Svetlana
Rabinovich, Zinovi
Hao, Jianye
Chen, Yingfeng
Fan, Changjie
Publication Year :
2022

Abstract

We investigate model-free multi-agent reinforcement learning (MARL) in environments where off-beat actions are prevalent, i.e., all actions have pre-set execution durations. During these execution durations, environment changes are influenced by, but not synchronised with, action execution. Such a setting is ubiquitous in many real-world problems. However, most MARL methods assume actions are executed immediately after inference, which is often unrealistic and can lead to catastrophic failure of multi-agent coordination when actions are off-beat. To fill this gap, we develop an algorithmic framework for MARL with off-beat actions. We then propose a novel episodic memory, LeGEM, for model-free MARL algorithms. LeGEM builds agents' episodic memories from their individual experiences. It boosts multi-agent learning by addressing the challenging temporal credit assignment problem raised by off-beat actions via our novel reward redistribution scheme, alleviating the issue of non-Markovian reward. We evaluate LeGEM on various multi-agent scenarios with off-beat actions, including the Stag-Hunter Game, Quarry Game, Afforestation Game, and StarCraft II micromanagement tasks. Empirical results show that LeGEM significantly boosts multi-agent coordination and achieves leading performance and improved sample efficiency.
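The "off-beat" setting can be made concrete with a toy sketch. This is a hypothetical single-agent environment, not LeGEM or any of the paper's benchmarks: each action has a pre-set execution duration, so its reward arrives several steps after the decision, which is exactly the delayed, non-Markovian credit assignment problem the abstract describes.

```python
class OffBeatEnv:
    """Toy illustration of off-beat actions (hypothetical, not the
    paper's code): each action has a pre-set execution duration, so
    its effect and reward land steps after the decision is made."""

    # Hypothetical durations: action 1 takes 3 steps to execute.
    DURATIONS = {0: 1, 1: 3}

    def __init__(self):
        self.t = 0
        self.pending = []  # (finish_time, action) pairs still executing

    def step(self, action):
        self.t += 1
        self.pending.append((self.t + self.DURATIONS[action] - 1, action))
        # Reward is paid only when an action finishes executing, so the
        # reward observed at step t may credit a much earlier decision
        # (the non-Markovian reward issue the abstract refers to).
        finished = [a for f, a in self.pending if f <= self.t]
        self.pending = [(f, a) for f, a in self.pending if f > self.t]
        return sum(2 if a == 1 else 1 for a in finished)

env = OffBeatEnv()
rewards = [env.step(a) for a in [1, 0, 0, 0]]
# The duration-3 action chosen at step 1 only pays off at step 3,
# mixed with the payoff of a later one-step action.
```

Here the reward sequence is [0, 1, 3, 1]: the step-3 reward bundles the outcomes of two different decisions, so a standard TD target would misattribute credit, which is the gap a reward redistribution scheme like the one proposed aims to close.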

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....2f3a09be0a601820e93bf5fec4ae593e