
Path Planning for Outdoor Mobile Robots Based on IDDQN

Authors :
Jiang Shuhai
Sun Shangjie
Li Cun
Source :
IEEE Access, Vol 12, Pp 51012-51025 (2024)
Publication Year :
2024
Publisher :
IEEE, 2024.

Abstract

Path planning is one of the research hotspots for outdoor mobile robots. This paper addresses the slow convergence and low accuracy of the Double Deep Q Network (DDQN) method, a deep reinforcement learning approach, in environments with many obstacles. A new algorithm, the Improved Double Deep Q Network (IDDQN), is proposed, which enhances DDQN with second-order temporal difference methods and a binary tree data structure. The improved method evaluates the current robot's actions using second-order temporal difference methods and stores the resulting values in a binary tree structure that replaces the traditional experience pool. The environment is constructed with a grid method and programmed in Python, with two two-dimensional grid maps created for a simple and a complex environment. DDQN and four related deep reinforcement learning methods, such as the Multi-step updates and Experience Classification Double Deep Q Network (ECMS-DDQN), are compared with the IDDQN method through simulation experiments. Simulation results indicate that the IDDQN method improves various path planning metrics compared to DDQN and the other relevant reinforcement learning methods. In the simple environment, the IDDQN method achieves a 26.89% improvement in step convergence time, a 22.58% improvement in reward convergence time, and a 10.30% improvement in average reward value after convergence compared to the original DDQN algorithm. It also outperforms the other simulated methods in the simple environment, although the difference is not significant. In the complex environment, the IDDQN method avoids falling into local optima, unlike the other methods, demonstrating the accuracy of its strategy in complex environments. The other methods show artificially high average reward values after converging to local optima, so these values have little reference value. In the complex environment, the IDDQN method achieves a 33.22% improvement in step convergence time and a 25.47% improvement in reward convergence time compared to the original DDQN algorithm, clearly surpassing the other simulated methods. These data indicate that the IDDQN method improves both convergence speed and accuracy compared to DDQN and the related improved methods simulated in this paper. The performance gain is particularly evident in environments with many obstacles, allowing effective path planning in such environments.
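The abstract does not give the exact formulation of the binary-tree experience store or the second-order temporal difference target, so the following is only a minimal sketch of how such components are commonly realized: a sum-tree (as used in prioritized experience replay) standing in for the paper's binary-tree storage, and a two-step bootstrap target standing in for the second-order temporal difference evaluation. The class and function names (SumTree, second_order_td_target) are hypothetical and not taken from the paper.

```python
import numpy as np


class SumTree:
    """Hypothetical binary (sum) tree for storing transitions with priorities.

    Assumed stand-in for the paper's binary-tree replacement of the
    experience pool; the abstract does not specify its exact structure.
    """

    def __init__(self, capacity):
        self.capacity = capacity                 # number of leaf nodes
        self.tree = np.zeros(2 * capacity - 1)   # internal sums + leaf priorities
        self.data = [None] * capacity            # stored transitions
        self.write = 0                           # next leaf slot to overwrite
        self.size = 0

    def add(self, priority, transition):
        leaf = self.write + self.capacity - 1
        self.data[self.write] = transition
        self.update(leaf, priority)
        self.write = (self.write + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def update(self, leaf, priority):
        change = priority - self.tree[leaf]
        self.tree[leaf] = priority
        # Propagate the priority change up to the root.
        while leaf != 0:
            leaf = (leaf - 1) // 2
            self.tree[leaf] += change

    def sample(self, value):
        """Walk from the root to the leaf whose cumulative priority covers `value`."""
        idx = 0
        while 2 * idx + 1 < len(self.tree):      # stop at a leaf node
            left, right = 2 * idx + 1, 2 * idx + 2
            if value <= self.tree[left]:
                idx = left
            else:
                value -= self.tree[left]
                idx = right
        return idx, self.tree[idx], self.data[idx - self.capacity + 1]


def second_order_td_target(r1, r2, q_next2, gamma=0.99):
    """Two-step bootstrap target, one plausible reading of the paper's
    'second-order temporal difference' evaluation (an assumption, not
    the authors' stated formula)."""
    return r1 + gamma * r2 + gamma ** 2 * q_next2
```

In this reading, transitions would be added to the tree with a priority derived from their temporal difference error, and sampling a uniform value in [0, total priority] retrieves transitions in proportion to that priority; whether the paper uses proportional sampling or another traversal rule is not stated in the abstract.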

Details

Language :
English
ISSN :
2169-3536
Volume :
12
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.0b5606244b274c5ba6bbdc30ed0e5273
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2024.3354075