1. Offloading Strategy Based on Graph Neural Reinforcement Learning in Mobile Edge Computing.
- Author
- Wang, Tao; Ouyang, Xue; Sun, Dingmi; Chen, Yimin; Li, Hao
- Subjects
- REINFORCEMENT learning; DEEP reinforcement learning; MOBILE computing; GRAPH neural networks; EDGE computing; MOBILE learning; CHARGE carrier mobility
- Abstract
In the mobile edge computing (MEC) architecture, base stations with computational capabilities have limited service coverage, and device mobility causes connections to change dynamically, which directly affects agents' offloading decisions. The connections between base stations and mobile devices, as well as the connections among base stations themselves, are abstracted into an MEC structural graph. Because deep reinforcement learning (DRL) struggles to capture the complex relationships between graph nodes and their multi-order neighbors, decisions generated by DRL alone are limited. To address this issue, this study proposes a hierarchical-mechanism strategy based on Graph Neural Reinforcement Learning (M-GNRL) under multiple constraints. Specifically, the MEC structural graph, constructed with the current device as the observation point, is aggregated to learn node features so that each node's contextual information is considered comprehensively; the learned graph information then serves as the environment for DRL, effectively integrating a graph neural network (GNN) with DRL. In the M-GNRL strategy, edge features from the GNN are introduced into the DRL network architecture to improve the accuracy of the agents' decisions. In addition, the study proposes an update algorithm to obtain graph data that change with the observation point. Comparative experiments show that M-GNRL outperforms baseline algorithms in both system cost and convergence performance. [ABSTRACT FROM AUTHOR]
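The abstract describes GNN node aggregation over the MEC structural graph feeding a DRL decision network that also consumes edge features. The sketch below is not the authors' implementation; it is a minimal illustration of that idea in PyTorch, with all class names, dimensions, the single mean-aggregation layer, and the Q-value head being illustrative assumptions.

```python
# Minimal sketch: GNN aggregation over an MEC graph + a Q-network that scores
# device->base-station offloading choices using node embeddings and edge features.
# Names and dimensions are hypothetical, not from the paper.
import torch
import torch.nn as nn


class GraphAggregator(nn.Module):
    """One round of mean-neighbor aggregation over the MEC structural graph."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.self_proj = nn.Linear(in_dim, hid_dim)
        self.neigh_proj = nn.Linear(in_dim, hid_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim) node features; adj: (N, N) adjacency (1 = connected).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg                    # average neighbor features
        return torch.relu(self.self_proj(x) + self.neigh_proj(neigh_mean))


class OffloadQNet(nn.Module):
    """Scores each device-to-base-station edge as a candidate offloading action."""

    def __init__(self, node_dim: int, edge_dim: int, hid_dim: int = 64):
        super().__init__()
        self.gnn = GraphAggregator(node_dim, hid_dim)
        # The Q-head sees the device embedding, a candidate server embedding,
        # and the features of the connecting edge (e.g. link rate, distance).
        self.q_head = nn.Sequential(
            nn.Linear(2 * hid_dim + edge_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, 1),
        )

    def forward(self, x, adj, edge_feats, device_idx, server_idx):
        h = self.gnn(x, adj)                          # (N, hid_dim) embeddings
        dev = h[device_idx].expand(len(server_idx), -1)
        srv = h[server_idx]
        q = self.q_head(torch.cat([dev, srv, edge_feats], dim=-1))
        return q.squeeze(-1)                          # one Q-value per candidate


if __name__ == "__main__":
    # Toy graph: device 0 in range of base stations 1 and 2, which interconnect.
    x = torch.randn(3, 4)                             # 3 nodes, 4 features each
    adj = torch.tensor([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
    edge_feats = torch.randn(2, 3)                    # features of edges 0-1, 0-2
    net = OffloadQNet(node_dim=4, edge_dim=3)
    q_values = net(x, adj, edge_feats, device_idx=0,
                   server_idx=torch.tensor([1, 2]))
    print(q_values)  # offload to the base station with the highest Q-value
```

In this reading, the aggregated graph embeddings act as the DRL state, and changing the observation point (the current device) simply rebuilds `x`, `adj`, and `edge_feats` before the next decision step.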
- Published
- 2024