UAV Coverage Path Planning With Limited Battery Energy Based on Improved Deep Double Q-network.
- Source:
- International Journal of Control, Automation & Systems; Aug 2024, Vol. 22, Issue 8, p2591-2601, 11p
- Publication Year:
- 2024
Abstract
- In response to the increasingly complex problem of patrolling urban areas, the use of deep reinforcement learning algorithms for autonomous unmanned aerial vehicle (UAV) coverage path planning (CPP) has gradually become a research hotspot. A CPP solution must account for several complex factors, including landing areas, target-area coverage, and limited battery capacity. Consequently, under incomplete environmental information, the policies learned by sample-inefficient deep reinforcement learning algorithms are prone to getting trapped in local optima. To enhance the quality of experience data, a novel reward is proposed to guide UAVs in efficiently traversing the target area under battery limitations. Then, to improve the sample efficiency of deep reinforcement learning algorithms, this paper introduces a novel dynamic soft update method, incorporates the prioritized experience replay mechanism, and presents an improved deep double Q-network (IDDQN) algorithm. Finally, simulation experiments on two different grid maps demonstrate that IDDQN significantly outperforms DDQN. Our method simultaneously improves the algorithm's sample efficiency and safety performance, enabling UAVs to cover a larger number of target areas. [ABSTRACT FROM AUTHOR]
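The abstract names three generic building blocks: a double-Q learning target, a soft target-network update whose rate could be varied during training, and prioritized experience replay. The sketch below illustrates standard versions of these components only; it is not the paper's IDDQN. The network sizes, `gamma`, `tau`, and `alpha` are placeholder assumptions, and the paper's specific "dynamic" soft-update schedule and reward design are not reproduced here.

```python
# Minimal illustrative sketch (not the authors' implementation).
import copy
import numpy as np
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 5))  # toy Q-network
target_net = copy.deepcopy(q_net)
gamma = 0.99  # discount factor (placeholder)

def double_q_target(reward, next_state, done):
    """Double DQN target: the online net picks the action, the target net scores it."""
    with torch.no_grad():
        best_a = q_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_a).squeeze(1)
    return reward + gamma * (1.0 - done) * next_q

def soft_update(tau):
    """Polyak averaging: target <- tau * online + (1 - tau) * target.
    A 'dynamic' variant would adjust tau over training; the paper's rule is not shown here."""
    for tp, p in zip(target_net.parameters(), q_net.parameters()):
        tp.data.mul_(1.0 - tau).add_(tau * p.data)

def sample_prioritized(priorities, batch_size, alpha=0.6):
    """Proportional prioritized replay: P(i) ~ p_i**alpha (Schaul et al., 2016)."""
    probs = priorities ** alpha
    probs /= probs.sum()
    idx = np.random.choice(len(priorities), size=batch_size, p=probs)
    weights = (len(priorities) * probs[idx]) ** -1.0  # importance-sampling weights, beta = 1
    return idx, weights / weights.max()
```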
Details
- Language:
- English
- ISSN:
- 1598-6446
- Volume:
- 22
- Issue:
- 8
- Database:
- Complementary Index
- Journal:
- International Journal of Control, Automation & Systems
- Publication Type:
- Academic Journal
- Accession Number:
- 178805215
- Full Text:
- https://doi.org/10.1007/s12555-023-0724-9