1. GCMA: An Adaptive Multiagent Reinforcement Learning Framework With Group Communication for Complex and Similar Tasks Coordination
- Authors
Kexing Peng, Tinghuai Ma, Xin Yu, Huan Rong, Yurong Qian, and Najla Al-Nabhan
- Abstract
Coordinating multiple agents with diverse tasks and changing goals without interference is challenging. Multiagent reinforcement learning (MARL) aims to develop effective communication and joint policies through group learning. Some previous approaches require each agent to maintain its own set of networks independently, ignoring interactions among agents, while fully joint communication floods agents with information irrelevant to their own tasks. Agents with different task divisions are currently often grouped by action tendency, which can lead to poor dynamic grouping. This article presents a two-phase solution that addresses these issues. The first phase learns joint communication policies for heterogeneous agents using a group communication MARL framework (GCMA). The framework employs a periodic grouping strategy that reduces exploration and communication redundancy by dynamically assigning group-wise hidden features to agents through a hypernetwork and graph communication, making efficient use of resources when adapting to multiple similar tasks. In the second phase, each agent's policy network is distilled into a simple, generalized network that adapts to similar tasks of varying quantity and size. GCMA is evaluated in complex environments such as StarCraft II and unmanned aerial vehicle (UAV) take-off, demonstrating strong performance on large-scale coordination tasks. Multitask experiments with simulated pedestrians further demonstrate its solid generalization.
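The two phases the abstract describes, hypernetwork-generated group-conditioned policies and distillation into a smaller generalized network, can be illustrated with a minimal PyTorch sketch. Everything below (module names, tensor shapes, the single generated linear layer, the temperature-scaled KL loss) is an illustrative assumption, not the paper's actual GCMA architecture.

```python
# Hypothetical sketch of (1) a hypernetwork mapping a group embedding to the
# weights of one policy layer, and (2) distilling that policy into a student
# via a softened KL loss. Shapes and names are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupHyperPolicy(nn.Module):
    """Phase 1 (sketch): group embedding -> weights of one linear policy layer."""
    def __init__(self, obs_dim, n_actions, group_dim):
        super().__init__()
        self.obs_dim, self.n_actions = obs_dim, n_actions
        # The hypernetwork emits a flat parameter vector: weight W plus bias b.
        self.hyper = nn.Linear(group_dim, obs_dim * n_actions + n_actions)

    def forward(self, obs, group_emb):
        params = self.hyper(group_emb)                    # (B, obs*act + act)
        W = params[:, : self.obs_dim * self.n_actions]
        b = params[:, self.obs_dim * self.n_actions :]
        W = W.view(-1, self.n_actions, self.obs_dim)      # (B, act, obs)
        # Batched linear: each agent acts with weights generated for its group.
        return torch.bmm(W, obs.unsqueeze(-1)).squeeze(-1) + b

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Phase 2 (sketch): match softened teacher outputs with a KL divergence."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

# Toy usage with random tensors.
policy = GroupHyperPolicy(obs_dim=8, n_actions=4, group_dim=16)
obs, group = torch.randn(32, 8), torch.randn(32, 16)
teacher_logits = policy(obs, group)
student_logits = torch.randn(32, 4, requires_grad=True)
loss = distill_loss(student_logits, teacher_logits.detach())
loss.backward()
```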
- Published
2024