D2D Mobile Relaying Meets NOMA—Part II: A Reinforcement Learning Perspective.
- Author
- Driouech, Safaa; Sabir, Essaid; Ghogho, Mounir; Amhoud, El-Mehdi; Wang, Xianbin; Chong, Peter
- Subjects
- Reinforcement learning; Intelligence levels; Decision making; Nash equilibrium
- Abstract
Structureless communications such as Device-to-Device (D2D) relaying are undeniably of paramount importance to improving the performance of today's mobile networks. Such a communication paradigm requires a certain level of intelligence at the device level, allowing each device to interact with its environment and make proper decisions. However, decentralizing decision-making may induce paradoxical outcomes and a resulting drop in performance, which motivates the design of self-organizing yet efficient systems. We propose a scheme in which each device decides either to connect directly to the eNodeB or to access the network via a D2D link through another device. In the first part of this article, we describe a biform game framework to analyze the performance of the proposed self-organized system under pure and mixed strategies. We then use two reinforcement learning (RL) algorithms that enable devices to self-organize and learn their pure/mixed equilibrium strategies in a fully distributed fashion. Decentralized RL algorithms are shown to play an important role in allowing devices to self-organize and reach satisfactory performance under incomplete information or even uncertainty. Through simulations, we highlight the importance of D2D relaying and assess how our learning schemes perform under slow/fast channel fading. [ABSTRACT FROM AUTHOR]
- Published
- 2021
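The abstract describes fully distributed learning of pure/mixed equilibrium strategies but, naturally, does not spell out the update rules. As a rough, hypothetical illustration only (not the authors' algorithm), the Python sketch below has each device run a linear reward-inaction learning automaton over the two actions mentioned in the abstract (connect directly to the eNodeB, or relay over a D2D link). The reward model, learning rate, and number of devices are all assumptions made for the sketch.

```python
# Illustrative sketch of distributed mixed-strategy learning (hypothetical;
# the reward values and learning rate are assumptions, not from the article).
import random

ACTIONS = ("direct", "d2d_relay")  # connect to eNodeB vs. relay via D2D link

class DeviceLearner:
    """One device updating its mixed strategy via linear reward-inaction."""
    def __init__(self, learning_rate=0.05):
        self.lr = learning_rate
        self.prob = {a: 1.0 / len(ACTIONS) for a in ACTIONS}  # mixed strategy

    def choose_action(self):
        # Sample an action according to the current mixed strategy.
        r, acc = random.random(), 0.0
        for a in ACTIONS:
            acc += self.prob[a]
            if r <= acc:
                return a
        return ACTIONS[-1]

    def update(self, action, reward):
        # Linear reward-inaction update; reward must be normalized to [0, 1].
        for a in ACTIONS:
            if a == action:
                self.prob[a] += self.lr * reward * (1.0 - self.prob[a])
            else:
                self.prob[a] -= self.lr * reward * self.prob[a]

def rewards_for(choices):
    """Toy normalized rewards: D2D relaying pays off only if at least one
    device keeps a direct link to the eNodeB to act as an anchor."""
    n_direct = sum(1 for c in choices.values() if c == "direct")
    rewards = {}
    for dev, c in choices.items():
        if c == "direct":
            rewards[dev] = 0.6
        else:
            rewards[dev] = 0.9 if n_direct > 0 else 0.1
    return rewards

devices = {i: DeviceLearner() for i in range(4)}
for _ in range(5000):
    choices = {i: d.choose_action() for i, d in devices.items()}
    payoffs = rewards_for(choices)
    for i, d in devices.items():
        d.update(choices[i], payoffs[i])  # each device learns on its own

for i, d in devices.items():
    print(i, {a: round(p, 2) for a, p in d.prob.items()})
```

Linear reward-inaction is used here only because it is a simple, well-known payoff-based scheme for learning mixed strategies in a fully decentralized way; the two RL algorithms actually used in the article may differ.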