1. Graph Attention Network-based Multi-agent Reinforcement Learning for Slicing Resource Management in Dense Cellular Network
- Author
- Shao, Yan; Li, Rongpeng; Hu, Bing; Wu, Yingxiao; Zhao, Zhifeng; and Zhang, Honggang
- Abstract
Network slicing (NS) management is devoted to providing various services that meet distinct requirements over the same physical communication infrastructure and to allocating resources on demand. In a dense cellular network scenario containing several NSs over multiple base stations (BSs), it remains challenging to design a proper real-time inter-slice resource management strategy that copes with frequent BS handover and accommodates the fluctuations of distinct service requirements. In this paper, we propose to formulate this challenge as a multi-agent reinforcement learning (MARL) problem in which each BS represents an agent. We then leverage a graph attention network (GAT) to strengthen the temporal and spatial cooperation between agents. Furthermore, we incorporate GAT into deep reinforcement learning (DRL) and correspondingly design an intelligent real-time inter-slice resource management strategy. More specifically, we demonstrate the general effectiveness of GAT for advancing DRL in the multi-agent system by applying GAT on top of both the value-based method deep Q-network (DQN) and advantage actor-critic (A2C), a combination of policy-based and value-based methods. Finally, we verify the superiority of the GAT-based MARL algorithms through extensive simulations.
- Published
- 2021
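
The abstract above describes placing a graph attention layer between each base station's local observation encoder and its DRL head (DQN or A2C). The following is a minimal illustrative sketch, not the authors' code: it assumes PyTorch, a single-head GAT layer, a per-BS observation vector of slice loads, a binary BS adjacency matrix, and a discrete action space of inter-slice bandwidth allocations; all names and dimensions are hypothetical.

```python
# Sketch: GAT-augmented per-agent DQN head for inter-slice resource management.
# Each base station (agent) encodes its local slice-load observation, one GAT
# layer aggregates neighbours' encodings over the BS graph, and a Q head maps
# the result to Q-values over discrete bandwidth-allocation actions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GATLayer(nn.Module):
    """Single-head graph attention layer in the style of Velickovic et al. (2018)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared linear transform
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer over [z_i || z_j]

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (N, in_dim)  node features, one row per base station
        # adj: (N, N)       binary adjacency (1 = neighbouring BS, including self)
        z = self.W(h)                                      # (N, out_dim)
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)               # broadcast pairs (i, j)
        zj = z.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))         # attend only to neighbours
        alpha = torch.softmax(e, dim=-1)                   # attention weights per node
        return F.elu(alpha @ z)                            # neighbour-aggregated features


class GATDQNAgent(nn.Module):
    """GAT encoder shared across agents, followed by a per-agent Q-value head."""

    def __init__(self, obs_dim: int, hidden: int, n_actions: int):
        super().__init__()
        self.enc = nn.Linear(obs_dim, hidden)
        self.gat = GATLayer(hidden, hidden)
        self.q_head = nn.Linear(hidden, n_actions)         # Q-values per allocation action

    def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.enc(obs))                          # (N, hidden) local encodings
        h = self.gat(h, adj)                               # cooperation via attention
        return self.q_head(h)                              # (N, n_actions)


if __name__ == "__main__":
    n_bs, obs_dim, n_actions = 4, 8, 5                     # toy sizes, assumed
    obs = torch.randn(n_bs, obs_dim)                       # per-BS slice observations
    adj = torch.ones(n_bs, n_bs)                           # fully connected toy BS graph
    q_values = GATDQNAgent(obs_dim, 32, n_actions)(obs, adj)
    print(q_values.shape)                                  # torch.Size([4, 5])
```

The same GAT encoder could instead feed separate actor and critic heads to obtain the A2C variant mentioned in the abstract; only the output layers and training loss would change.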