
CIMRL: Combining IMitation and Reinforcement Learning for Safe Autonomous Driving

Authors:
Booher, Jonathan
Rohanimanesh, Khashayar
Xu, Junhong
Isenbaev, Vladislav
Balakrishna, Ashwin
Gupta, Ishan
Liu, Wei
Petiushko, Aleksandr
Publication Year: 2024

Abstract

Modern approaches to autonomous driving rely heavily on learned components trained with large amounts of human driving data via imitation learning. However, these methods require expensive large-scale data collection, and even then they struggle to safely handle long-tail scenarios and to avoid compounding errors over time. At the same time, pure Reinforcement Learning (RL) methods can fail to learn performant policies in sparse, constrained, and challenging-to-define reward settings such as driving. Both of these challenges make deploying purely cloned policies in safety-critical applications like autonomous vehicles difficult. In this paper we propose the Combining IMitation and Reinforcement Learning (CIMRL) approach, a framework that enables training driving policies in simulation by leveraging imitative motion priors and safety constraints. CIMRL does not require extensive reward specification and improves on the closed-loop behavior of pure cloning methods. By combining RL and imitation, we demonstrate that our method achieves state-of-the-art results on closed-loop simulated driving benchmarks.
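The abstract does not spell out the mechanism, but one common way to combine an imitative motion prior with RL-learned safety constraints is to have an RL-trained value head select among trajectory proposals produced by the prior, while a separate learned safety critic masks out proposals whose predicted cost exceeds a budget. The sketch below is purely illustrative of that general pattern, not the paper's actual architecture; `PriorGuidedPolicy`, its two heads, and `cost_budget` are hypothetical names introduced here for the example.

```python
# Illustrative sketch only: selecting among imitation-prior trajectory
# proposals with an RL value head plus a safety-cost mask. All names,
# shapes, and the cost budget are assumptions, not taken from CIMRL.
import torch
import torch.nn as nn


class PriorGuidedPolicy(nn.Module):
    def __init__(self, obs_dim: int, num_proposals: int, hidden: int = 128):
        super().__init__()
        # RL-trained head: scores each proposal from the motion prior.
        self.q_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_proposals),
        )
        # Safety critic: predicts a per-proposal cost (e.g. collision risk).
        self.safety_head = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_proposals),
        )

    def forward(self, obs: torch.Tensor, cost_budget: float = 0.1) -> torch.Tensor:
        q = self.q_head(obs)                    # task value per proposal
        cost = self.safety_head(obs).sigmoid()  # estimated safety cost in [0, 1]
        # Mask proposals whose predicted cost exceeds the budget, then pick
        # the highest-value proposal that remains; if every proposal is
        # masked, fall back to the lowest-cost one.
        masked_q = q.masked_fill(cost > cost_budget, float("-inf"))
        any_safe = torch.isfinite(masked_q).any(dim=-1)
        return torch.where(any_safe, masked_q.argmax(dim=-1), cost.argmin(dim=-1))


if __name__ == "__main__":
    policy = PriorGuidedPolicy(obs_dim=32, num_proposals=8)
    obs = torch.randn(4, 32)   # batch of 4 observations
    print(policy(obs))         # index of the chosen proposal per sample
```

One appeal of this selection-style design, consistent with the abstract's claims, is that the reward signal only needs to rank proposals rather than shape low-level control, and the safety mask constrains closed-loop behavior without extensive reward specification.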

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2406.08878
Document Type: Working Paper