
Deep Reinforcement Learning for Collaborative Edge Computing in Vehicular Networks

Authors:
Li, Mushu
Gao, Jie
Zhao, Lian
Shen, Xuemin
Publication Year:
2020

Abstract

Mobile edge computing (MEC) is a promising technology to support mission-critical vehicular applications, such as intelligent path planning and safety applications. In this paper, a collaborative edge computing framework is developed to reduce the computing service latency and improve service reliability for vehicular networks. First, a task partition and scheduling algorithm (TPSA) is proposed to decide the workload allocation and schedule the execution order of the tasks offloaded to the edge servers, given a computation offloading strategy. Second, an artificial intelligence (AI) based collaborative computing approach is developed to determine the task offloading, computing, and result delivery policy for vehicles. Specifically, the offloading and computing problem is formulated as a Markov decision process. A deep reinforcement learning technique, namely deep deterministic policy gradient (DDPG), is adopted to find the optimal solution in a complex urban transportation network. With the proposed approach, the service cost, which includes the computing service latency and a service failure penalty, is minimized via optimal workload assignment and server selection in collaborative computing. Simulation results show that the proposed AI-based collaborative computing approach can adapt to a highly dynamic environment with outstanding performance.

Comment: 31 pages, single column, 12 figures. Accepted in IEEE Transactions on Cognitive Communications and Networking
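To make the DDPG-based approach mentioned in the abstract concrete, below is a minimal illustrative sketch of a standard DDPG actor-critic update in PyTorch. The state and action dimensions, the encoding of the offloading/workload decision as a continuous action in [0, 1], and the reward definition (negative service cost) are assumptions made for illustration; they are not taken from the paper itself.

# Minimal sketch of a DDPG-style update for an offloading/computing MDP.
# Dimensions, action encoding, and reward shaping are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 8    # assumed: e.g., server queue lengths, channel quality, vehicle position
ACTION_DIM = 3   # assumed: e.g., offloading ratio and per-server workload weights

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid())  # continuous actions in [0, 1]
    def forward(self, s):
        return self.net(s)

class Critic(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), Critic()
actor_tgt, critic_tgt = Actor(), Critic()
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
GAMMA, TAU = 0.99, 0.005

def ddpg_update(batch):
    """One gradient step on a replay batch (s, a, r, s_next) of shape (B, dim)."""
    s, a, r, s_next = batch
    with torch.no_grad():
        # Reward r would be the negative service cost (latency plus failure penalty).
        q_target = r + GAMMA * critic_tgt(s_next, actor_tgt(s_next))
    critic_loss = nn.functional.mse_loss(critic(s, a), q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor ascends the critic's value estimate of its own actions.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Soft-update the target networks.
    for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - TAU).add_(TAU * p.data)

In practice, the actor's output would be mapped to a concrete workload split across edge servers and a server-selection decision; how the paper performs that mapping (and the TPSA scheduling step) is described in the full text, not reproduced here.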

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2010.01722
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TCCN.2020.3003036