
Navigation Command Matching for Vision-based Autonomous Driving

Authors:
Jianru Xue
Yuxin Pan
Pengfei Zhang
Wanli Ouyang
Xingyu Chen
Jianwu Fang
Source:
ICRA
Publication Year:
2020
Publisher:
IEEE, 2020.

Abstract

Learning an optimal policy for the autonomous driving task in complex environments is a long-studied challenge. Imitative reinforcement learning is accepted as a promising approach to learning a robust driving policy through expert demonstrations and interactions with the environment. However, this model utilizes non-smooth rewards, which harm the matching between navigation commands and trajectories (state-action pairs) and degrade the generalizability of the agent. Smooth rewards are crucial for discriminating actions generated by a sub-optimal policy. In this paper, we propose a navigation command matching (NCM) model to address this issue. NCM has two key components: 1) a matching measurer produces smooth navigation rewards that measure the matching between navigation commands and the trajectory; 2) an attention-guided agent performs actions given states in which salient regions of the RGB images (i.e., roadsides, lane markings, and dynamic obstacles) are highlighted to amplify their influence on the final model. We obtain navigation rewards and store transitions in the replay buffer after an episode, so NCM is able to discriminate actions generated by a sub-optimal policy. Experiments on the CARLA driving benchmark show that the proposed NCM outperforms previous state-of-the-art models on various tasks in terms of the percentage of successfully completed episodes. Moreover, our model improves the generalizability of the agent and performs well even in unseen scenarios.
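To make the episode-level reward assignment described above concrete, the sketch below illustrates one plausible reading of the abstract: an agent rolls out a full episode, a matching measurer then scores how well the trajectory matches the navigation command, and the resulting smooth rewards are attached to the stored transitions. This is not the authors' implementation; the MatchingMeasurer, ReplayBuffer, env, and agent interfaces are hypothetical placeholders.

    # Minimal sketch (assumed interfaces, not the paper's code): smooth navigation
    # rewards are computed only after the episode, then written to the replay buffer.
    class NCMEpisodeCollector:
        def __init__(self, matching_measurer, replay_buffer):
            self.measurer = matching_measurer   # hypothetical: (command, trajectory) -> per-step rewards
            self.buffer = replay_buffer         # hypothetical: standard off-policy replay buffer

        def run_episode(self, env, agent, command):
            trajectory = []                     # collected (state, action, next_state, done) tuples
            state, done = env.reset(), False
            while not done:
                action = agent.act(state, command)   # attention-guided policy conditioned on the command
                next_state, done = env.step(action)  # assumed env API returning (next_state, done)
                trajectory.append((state, action, next_state, done))
                state = next_state

            # The matching measurer needs the whole trajectory to compare it
            # against the navigation command, so rewards are assigned post hoc.
            rewards = self.measurer.score(command, trajectory)
            for (s, a, s_next, d), r in zip(trajectory, rewards):
                self.buffer.add(s, a, r, s_next, d)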

Details

Database:
OpenAIRE
Journal:
2020 IEEE International Conference on Robotics and Automation (ICRA)
Accession number:
edsair.doi...........c25aeb4ea68546821963f18fe76cd73f