Approximate Markov Perfect Equilibrium of Joint Offloading Policy for Multi-IV Using Reward-Shared Distributed Method
- Source :
- IEEE Transactions on Intelligent Vehicles, Vol. 9, Issue 2, February 2024, pp. 3658-3671 (14 pages)
- Publication Year :
- 2024
-
Abstract
- In this article, we investigate the problem of optimizing the joint offloading policy in a distributed manner for multiple intelligent vehicles (IVs). During their journeys through vehicular edge computing (VEC) networks, IVs continually optimize their joint offloading policy to minimize the long-term accumulated costs of executing computational tasks. The stochastic and repeated interactions among IVs are modeled as a Markov game, so that optimizing the joint offloading policy becomes the problem of approximating a Markov perfect equilibrium of a general-sum Markov game. Moreover, we argue that training in practical VEC networks under the classical centralized training and decentralized execution (CTDE) framework raises challenges of privacy and computational complexity. Motivated by these observations, we propose a reward-shared distributed policy optimization (RSDPO) method for the considered VEC networks. The experimental results demonstrate that the set of joint offloading policies produced by RSDPO approximates a Markov perfect equilibrium, and that RSDPO offers significant advantages in converged latency and energy consumption over competing methods.
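- To illustrate the core idea behind reward sharing, the sketch below is a minimal toy model (not the paper's algorithm): each IV independently learns whether to offload a task or execute it locally, and all agents are trained on a single shared scalar reward (the negated average team cost) instead of a centralized critic. The cost numbers, congestion model, and stateless Q-update are all illustrative assumptions.

```python
import random

def task_cost(action, congestion):
    """Toy per-IV cost: offloading is cheap unless the edge server is
    congested. Numbers are illustrative, not from the paper."""
    if action == 1:  # offload to the edge server
        return 1.0 + 2.0 * congestion  # latency grows with congestion
    return 2.5  # local execution: fixed energy/latency cost

def shared_reward(actions):
    """Reward sharing: every IV receives the negated average team cost,
    so training needs only a broadcast scalar, not a centralized critic."""
    congestion = sum(actions) / len(actions)
    costs = [task_cost(a, congestion) for a in actions]
    return -sum(costs) / len(costs)

def train(n_agents=4, episodes=3000, eps=0.1, lr=0.1, seed=0):
    """Decentralized training: each IV keeps its own Q-values (stateless
    bandit view) and updates them only from the shared reward."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_agents)]  # Q[agent][action]
    for _ in range(episodes):
        actions = [
            rng.randrange(2) if rng.random() < eps
            else max((0, 1), key=lambda a: q[i][a])
            for i in range(n_agents)
        ]
        r = shared_reward(actions)
        for i, a in enumerate(actions):
            q[i][a] += lr * (r - q[i][a])  # purely local update
    return q
```

- In this toy model the shared reward is maximized when only one or two of the four IVs offload, so the decentralized learners are steered toward a team-level equilibrium even though each update uses only local Q-values plus the broadcast scalar.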
Details
- Language :
- English
- ISSN :
- 2379-8858
- Volume :
- 9
- Issue :
- 2
- Database :
- Supplemental Index
- Journal :
- IEEE Transactions on Intelligent Vehicles
- Publication Type :
- Periodical
- Accession number :
- ejs66238526
- Full Text :
- https://doi.org/10.1109/TIV.2024.3352422