
Double Deep Q-Network Based Distributed Resource Matching Algorithm for D2D Communication.

Authors :
Yuan, Yazhou
Li, Zhijie
Liu, Zhixin
Yang, Yi
Guan, Xinping
Source :
IEEE Transactions on Vehicular Technology; Jan2022, Vol. 71 Issue 1, p984-993, 10p
Publication Year :
2022

Abstract

Device-to-Device (D2D) communication with short communication distances is an efficient way to improve spectrum efficiency and mitigate interference. To realize the optimal resource configuration, including wireless channel matching and power allocation, a distributed resource matching scheme is proposed based on deep reinforcement learning (DRL). The reward is defined as the difference between the achievable rate of D2D users and the consumed power, subject to a Signal to Interference plus Noise Ratio (SINR) constraint for the cellular users on the current channel. The proposed algorithm maximizes D2D throughput and energy efficiency in a distributed manner, without online coordination or message exchange between users. The considered resource allocation problem is formulated as a random non-cooperative game with multiple players (D2D pairs), where each player is a learning agent whose task is to learn its best strategy from locally observed information. A multi-user communication resource matching algorithm is then proposed based on a Double Deep Q-Network (DDQN), under which the total cellular throughput and user energy efficiency converge to the Nash equilibrium (NE) under the mixed strategy. Simulation results show that the proposed algorithm improves the communication rate and energy efficiency of each user by selecting the optimal strategy, and achieves better convergence performance than existing schemes. [ABSTRACT FROM AUTHOR]
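The abstract names two concrete ingredients: a reward equal to the achievable rate minus the consumed power, gated by the cellular users' SINR constraint on the shared channel, and a Double DQN update, in which the online network selects the next action while the target network evaluates it. A minimal sketch of both, assuming illustrative parameter names and values (bandwidth, weights, and thresholds are not from the paper):

```python
import math

def d2d_reward(d2d_sinr, cellular_sinr, tx_power,
               bandwidth_hz=1e6, power_weight=1e-3,
               cellular_sinr_min=1.0, penalty=-1.0):
    # Hypothetical reward: achievable (Shannon) rate minus a weighted power
    # cost. If the cellular user sharing the channel falls below its SINR
    # threshold, the action is penalized instead.
    if cellular_sinr < cellular_sinr_min:
        return penalty
    rate = bandwidth_hz * math.log2(1.0 + d2d_sinr)  # bits/s
    return rate - power_weight * tx_power

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    # Double DQN target: the online network's Q-values choose the next
    # action (argmax), but the target network's Q-value for that action
    # is used in the bootstrap, reducing overestimation bias.
    if done:
        return reward
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]

# Example: a feasible (channel, power) action for one D2D agent.
r = d2d_reward(d2d_sinr=10.0, cellular_sinr=2.0, tx_power=100.0)
y = ddqn_target(r, next_q_online=[0.2, 0.8, 0.5], next_q_target=[1.0, 0.3, 0.7])
```

In the distributed game described by the abstract, each D2D pair would run such an agent independently, using only locally observed SINR values, with no message exchange between users.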

Details

Language :
English
ISSN :
0018-9545
Volume :
71
Issue :
1
Database :
Complementary Index
Journal :
IEEE Transactions on Vehicular Technology
Publication Type :
Academic Journal
Accession number :
154862279
Full Text :
https://doi.org/10.1109/TVT.2021.3130159