Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller.
- Source :
- Sensors (Basel, Switzerland) [Sensors (Basel)] 2023 Mar 29; Vol. 23 (7). Date of Electronic Publication: 2023 Mar 29.
- Publication Year :
- 2023
Abstract
- This work presents a framework that allows Unmanned Surface Vehicles (USVs) to avoid dynamic obstacles through initial training on an Unmanned Ground Vehicle (UGV) and cross-domain retraining on a USV. This is achieved by integrating a Deep Reinforcement Learning (DRL) agent that generates high-level control commands with a neural network-based model predictive controller (NN-MPC) that reaches target waypoints and rejects disturbances. The Deep Q-Network (DQN) used in this framework is trained in a ground environment with a Turtlebot robot and retrained in a water environment with the BREAM USV in the Gazebo simulator to avoid dynamic obstacles. The network is then validated in both simulation and real-world tests. Cross-domain learning substantially reduces training time (by 28%) and improves obstacle-avoidance performance (70 more reward points) compared with pure water-domain training. This methodology shows that data-rich, accessible ground environments can be leveraged to train DRL agents for data-poor and difficult-to-access marine environments, enabling rapid and iterative agent development without additional training whenever the environment or vehicle dynamics change.
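- The cross-domain retraining idea described above can be illustrated with a minimal sketch: a DQN policy is trained in the ground (UGV) domain, its weights are saved, and the same network is warm-started and further trained in the water (USV) domain. The observation size, action set, network shape, file names, and hyperparameters below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal cross-domain DQN warm-start sketch (assumptions: observation is a
# 1-D range scan, actions are discrete high-level heading commands; all
# dimensions and hyperparameters are illustrative, not the paper's values).
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small MLP Q-network mapping a range scan to Q-values over
    high-level heading commands (e.g., turn-left / keep-course / turn-right)."""
    def __init__(self, obs_dim: int = 24, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def dqn_update(q, q_target, optimizer, batch, gamma: float = 0.99):
    """One standard DQN temporal-difference update on a batch of transitions."""
    obs, act, rew, next_obs, done = batch
    q_sa = q(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1 - done) * q_target(next_obs).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Step 1: train in the ground (UGV) domain, then save the learned weights.
ground_q = QNet()
# ... ground-domain (Turtlebot / Gazebo) training loop would run here ...
torch.save(ground_q.state_dict(), "dqn_ground.pt")  # hypothetical file name

# Step 2: warm-start the water-domain agent from the UGV policy and retrain.
water_q = QNet()
water_q.load_state_dict(torch.load("dqn_ground.pt"))
water_target = QNet()
water_target.load_state_dict(water_q.state_dict())
optimizer = torch.optim.Adam(water_q.parameters(), lr=1e-4)
# ... water-domain (BREAM USV / Gazebo) retraining loop continues from here,
# with the DQN's discrete commands passed to the NN-MPC for waypoint tracking.
```

- In this sketch the transferred weights only give the water-domain agent a starting point; the retraining loop (not shown) is still needed, which is consistent with the abstract's reported reduction in training time rather than its elimination.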
Details
- Language :
- English
- ISSN :
- 1424-8220
- Volume :
- 23
- Issue :
- 7
- Database :
- MEDLINE
- Journal :
- Sensors (Basel, Switzerland)
- Publication Type :
- Academic Journal
- Accession number :
- 37050633
- Full Text :
- https://doi.org/10.3390/s23073572