Distributed Energy-Efficient Multi-UAV Navigation for Long-Term Communication Coverage by Deep Reinforcement Learning.
- Source :
- IEEE Transactions on Mobile Computing; Jun 2020, Vol. 19, Issue 6, p1274-1285, 12p
- Publication Year :
- 2020
Abstract
- In this paper, we aim to design a fully distributed control solution that navigates a group of unmanned aerial vehicles (UAVs), acting as mobile Base Stations (BSs) flying around a target area, to provide long-term communication coverage for ground mobile users. Different from existing solutions that mainly approach the problem from an optimization perspective, we propose a decentralized deep reinforcement learning (DRL) based framework to control each UAV in a distributed manner. Our goal is to maximize the temporal average coverage score achieved by all UAVs in a task, maximize the geographical fairness over all considered points-of-interest (PoIs), and minimize the total energy consumption, while keeping the UAVs connected and within the area border. We explicitly design the state, observation, action space, and reward, and model each UAV with deep neural networks (DNNs). Through extensive simulations we identify an appropriate set of hyperparameters, including the experience replay buffer size, the number of neural units in the two fully-connected hidden layers of the actor, critic, and their target networks, and the discount factor that weights future reward. The simulation results demonstrate the superiority of the proposed model over the state-of-the-art DRL-EC$^3$ approach based on deep deterministic policy gradient (DDPG), and over three other baselines. [ABSTRACT FROM AUTHOR]
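- The abstract outlines a DDPG-style agent per UAV: an actor and a critic, each with two fully-connected hidden layers, plus target networks, an experience replay buffer, and a tuned discount factor. The sketch below illustrates what such per-UAV networks could look like, assuming PyTorch; the layer widths, activations, observation/action dimensions, and the gamma/tau values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code) of a DDPG-style actor/critic that one
# UAV agent could use. Hidden sizes (400/300), tanh action bounding, and the
# obs_dim/act_dim/gamma/tau values below are assumptions for illustration only.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Maps a UAV's local observation to a continuous flight action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden1: int = 400, hidden2: int = 300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden1), nn.ReLU(),   # first fully-connected hidden layer
            nn.Linear(hidden1, hidden2), nn.ReLU(),   # second fully-connected hidden layer
            nn.Linear(hidden2, act_dim), nn.Tanh(),   # bounded action (e.g., heading, distance)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class Critic(nn.Module):
    """Estimates Q(observation, action) for the deterministic policy."""
    def __init__(self, obs_dim: int, act_dim: int, hidden1: int = 400, hidden2: int = 300):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden1), nn.ReLU(),
            nn.Linear(hidden1, hidden2), nn.ReLU(),
            nn.Linear(hidden2, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))

# Target networks start as frozen copies of the online networks and are
# soft-updated toward them, as is standard in DDPG; gamma discounts future reward.
actor, critic = Actor(obs_dim=8, act_dim=2), Critic(obs_dim=8, act_dim=2)
target_actor, target_critic = copy.deepcopy(actor), copy.deepcopy(critic)
gamma, tau = 0.99, 0.005  # assumed values; the paper tunes such hyperparameters via simulation
```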
Details
- Language :
- English
- ISSN :
- 1536-1233
- Volume :
- 19
- Issue :
- 6
- Database :
- Complementary Index
- Journal :
- IEEE Transactions on Mobile Computing
- Publication Type :
- Academic Journal
- Accession number :
- 143174361
- Full Text :
- https://doi.org/10.1109/TMC.2019.2908171