
Message Action Adapter Framework in Multi-Agent Reinforcement Learning

Authors:
Bumjin Park
Jaesik Choi
Source:
Applied Sciences, Vol 14, Iss 21, p 10079 (2024)
Publication Year:
2024
Publisher:
MDPI AG, 2024.

Abstract

Multi-agent reinforcement learning (MARL) has demonstrated significant potential for enabling cooperative agents. The communication protocol, which governs message exchange between agents, is crucial for cooperation. However, communicative MARL systems still face challenges from noisy messages in complex multi-agent decision processes. This issue often stems from the entangled representation of observations and messages in policy networks. To address this, we propose the Message Action Adapter Framework (MAAF), which first trains individual agents without message inputs and then adapts a residual action computed from message components. This separation isolates the effect of messages on action inference. We explore how training MAAF with model-agnostic message types and varying optimization strategies influences adaptation performance. The experimental results indicate that MAAF achieves competitive performance across multiple baselines despite utilizing only half of the available communication, and it shows an average improvement of 7.6% over the full attention-based communication approach. Additional findings show that different message types yield significant performance variations, underscoring the importance of environment-specific message types. We also demonstrate how the proposed architecture separates communication channels, effectively isolating message contributions.
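The core idea of the abstract, a message-free base policy plus a residual action term computed from messages, can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, layer sizes, and the zero-initialized adapter are assumptions chosen only to show how the residual keeps the message pathway separate from the observation pathway.

```python
import random

random.seed(0)

def linear(x, w, b):
    """Dense layer: y = W x + b, implemented over plain Python lists."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(w, b)]

class MAAFAgent:
    """Hypothetical sketch of the two-stage scheme described in the abstract:
    a base policy maps observations to action logits, and a separate message
    adapter adds a residual logit term computed from incoming messages."""

    def __init__(self, obs_dim, msg_dim, n_actions):
        rand_matrix = lambda rows, cols: [
            [random.uniform(-0.1, 0.1) for _ in range(cols)]
            for _ in range(rows)
        ]
        # Stage 1: base policy, trained without any message input.
        self.w_base = rand_matrix(n_actions, obs_dim)
        self.b_base = [0.0] * n_actions
        # Stage 2: message adapter, zero-initialized so the residual
        # starts as a no-op and only later learns to shift actions.
        self.w_msg = [[0.0] * msg_dim for _ in range(n_actions)]
        self.b_msg = [0.0] * n_actions

    def act(self, obs, msg=None):
        logits = linear(obs, self.w_base, self.b_base)
        if msg is not None:
            # Residual action term: the message's contribution is kept
            # in its own channel and simply added to the base logits.
            residual = linear(msg, self.w_msg, self.b_msg)
            logits = [l + r for l, r in zip(logits, residual)]
        return max(range(len(logits)), key=lambda a: logits[a])
```

Because the adapter is additive, zeroing its weights recovers the message-free agent exactly, which is one way such a design isolates the contribution of communication to action inference.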

Details

Language:
English
ISSN:
2076-3417
Volume:
14
Issue:
21
Database:
Directory of Open Access Journals
Journal:
Applied Sciences
Publication Type:
Academic Journal
Accession Number:
edsdoj.5ae5ec8b07b04ac3b07ca79999bb7037
Document Type:
Article
Full Text:
https://doi.org/10.3390/app142110079