
Enhancing Autonomous Vehicle Training with Language Model Integration and Critical Scenario Generation

Authors:
Tian, Hanlin
Reddy, Kethan
Feng, Yuxiang
Quddus, Mohammed
Demiris, Yiannis
Angeloudis, Panagiotis
Publication Year:
2024

Abstract

This paper introduces CRITICAL, a novel closed-loop framework for autonomous vehicle (AV) training and testing. CRITICAL stands out for its ability to generate diverse scenarios, focusing on critical driving situations that target specific learning and performance gaps identified in the Reinforcement Learning (RL) agent. The framework achieves this by integrating real-world traffic dynamics, driving behavior analysis, surrogate safety measures, and an optional Large Language Model (LLM) component. We show that establishing a closed feedback loop between the data generation pipeline and the training process improves the learning rate during training, raises overall system performance, and strengthens safety resilience. Our evaluations, conducted with Proximal Policy Optimization (PPO) in the HighwayEnv simulation environment, demonstrate noticeable performance improvements when critical case generation and LLM analysis are integrated, indicating CRITICAL's potential to improve the robustness of AV systems and streamline the generation of critical scenarios. This ultimately serves to accelerate the development of AV agents, expand the general scope of RL training, and strengthen validation efforts for AV safety.

Comment: 7 pages, 5 figures
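The closed feedback loop the abstract describes — evaluate the agent, identify its worst-performing cases, generate new critical scenarios around them, and retrain — can be sketched in a self-contained toy form. This is a hypothetical simplification, not the authors' implementation: scenarios are reduced to scalar difficulties and the "agent" to a single skill value, and all function names are invented for illustration.

```python
import random

rng = random.Random(0)  # fixed seed so the toy run is reproducible

def make_agent(skill=0.0):
    # Toy stand-in for an RL policy: it "handles" any scenario whose
    # difficulty does not exceed its current skill level.
    return {"skill": skill}

def evaluate(agent, scenarios):
    # Score each scenario; a negative margin marks a performance gap.
    return {s: agent["skill"] - s for s in scenarios}

def generate_critical(scores, k=2, jitter=0.5):
    # Target data generation at the k worst-scoring scenarios,
    # perturbing them slightly to diversify the critical set.
    worst = sorted(scores, key=scores.get)[:k]
    return [s + rng.uniform(-jitter, jitter) for s in worst]

def train(agent, scenarios, lr=0.3):
    # Toy update: nudge skill toward the hardest scenario in the batch.
    hardest = max(scenarios)
    agent["skill"] += lr * max(0.0, hardest - agent["skill"])
    return agent

agent = make_agent()
pool = [1.0, 2.0, 3.0]          # initial scenario difficulties
for _ in range(20):             # the closed feedback loop
    scores = evaluate(agent, pool)
    critical = generate_critical(scores)
    agent = train(agent, critical)
    pool += critical            # feed critical cases back into the pool
```

The key design point the sketch mirrors is that data generation is driven by the agent's current weaknesses rather than sampled uniformly, so training effort concentrates on the scenarios where the performance gap is largest.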

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2404.08570
Document Type:
Working Paper