1. Deductive Reinforcement Learning for Visual Autonomous Urban Driving Navigation.
- Author
- Huang, Changxin; Zhang, Ronghui; Ouyang, Meizi; Wei, Pengxu; Lin, Junfan; Su, Jiang; Lin, Liang
- Subjects
- *
REINFORCEMENT learning , *AUTONOMOUS vehicles , *VISUAL learning , *DEEP learning , *TRAFFIC safety , *AERONAUTICAL navigation , *NAVIGATION - Abstract
Existing deep reinforcement learning (RL) research is largely devoted to applications in video games, e.g., The Open Racing Car Simulator (TORCS) and Atari games. However, it remains under-explored for vision-based autonomous urban driving navigation (VB-AUDN). VB-AUDN requires a sophisticated agent that works safely in structured, changing, and unpredictable environments; otherwise, inappropriate operations may lead to irreversible or catastrophic damage. In this work, we propose a deductive RL (DeRL) to address this challenge. A deduction reasoner (DR) is introduced to endow the agent with the ability to foresee the future and to promote policy learning. Specifically, DR first predicts future transitions through a parameterized environment model. Then, DR conducts self-assessment on the predicted trajectory to perceive the consequences of the current policy, resulting in a more reliable decision-making process. Additionally, a semantic encoder module (SEM) is designed to extract a compact driving representation from raw images that is robust to changes in the environment. Extensive experimental results demonstrate that DeRL outperforms state-of-the-art model-free RL approaches on the public CAR Learning to Act (CARLA) benchmark and achieves superior success rate and driving safety for goal-directed navigation. [ABSTRACT FROM AUTHOR]
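The core idea of the deduction reasoner can be illustrated with a minimal sketch: roll out a learned environment model for a short horizon under the current policy, then score the imagined trajectory by its predicted cumulative reward. Everything below (the names `deduct`, `toy_env_model`, the point-mass dynamics) is a hypothetical illustration of this general model-based pattern, not the paper's actual implementation.

```python
# Hypothetical sketch of the deduction-reasoner idea: imagine future
# transitions with a (learned) environment model, then self-assess the
# current policy on the imagined trajectory. Names are illustrative.
from typing import Callable, Tuple

State = Tuple[float, float]  # toy state: (position, velocity)

def toy_env_model(state: State, action: float) -> Tuple[State, float]:
    """Stand-in for the parameterized environment model: predicts the
    next state and reward. Here, a point-mass rewarded for staying
    near the origin (reward = -|position|)."""
    pos, vel = state
    vel = vel + action
    pos = pos + vel
    return (pos, vel), -abs(pos)

def deduct(policy: Callable[[State], float],
           model: Callable[[State, float], Tuple[State, float]],
           state: State, horizon: int = 5) -> float:
    """Self-assessment: roll the model forward `horizon` steps under
    `policy` and return the cumulative predicted reward."""
    total = 0.0
    for _ in range(horizon):
        action = policy(state)
        state, reward = model(state, action)
        total += reward
    return total

# A damping policy should be assessed higher than a runaway one,
# letting the agent foresee consequences before acting for real.
damping = lambda s: -0.5 * s[0] - 0.5 * s[1]
runaway = lambda s: 1.0
start: State = (1.0, 0.0)
assert deduct(damping, toy_env_model, start) > deduct(runaway, toy_env_model, start)
```

In the paper's setting, the model and policy would operate on the compact driving representation produced by the SEM rather than on raw toy states, and the assessment score would feed back into policy learning.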
- Published
- 2021