1. Explanation for Trajectory Planning using Multi-modal Large Language Model for Autonomous Driving
- Authors
Yamazaki, Shota; Zhang, Chenyu; Nanri, Takuya; Shigekane, Akio; Wang, Siyuan; Nishiyama, Jo; Chu, Tao; Yokosawa, Kohei
- Subjects
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Robotics
- Abstract
End-to-end autonomous driving models have been developed recently. These models lack interpretability of the decision-making process from perception to control of the ego vehicle, which can cause anxiety for passengers. To alleviate this, it is effective to build a model that outputs captions describing the future behavior of the ego vehicle and the reasons for it. However, existing approaches generate reasoning text that inadequately reflects the future plan of the ego vehicle, because they train models to output captions using momentary control signals as inputs. In this study, we propose a reasoning model that takes the future planning trajectory of the ego vehicle as input to address this limitation, together with a newly collected dataset.
- Comment
Accepted and presented at the ECCV 2024 2nd Workshop on Vision-Centric Autonomous Driving (VCAD) on September 30, 2024. 13 pages, 5 figures.
- Published
2024
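The abstract's central idea is to condition caption and reason generation on the ego vehicle's planned future trajectory rather than on a momentary control signal. The sketch below is a minimal, hypothetical illustration of one way such a trajectory could be serialized into a text prompt for a multi-modal reasoning model; the waypoint format, prompt wording, and all names here are assumptions for illustration, not the paper's actual interface.

```python
# Hypothetical sketch (not the authors' implementation): serialize a planned
# ego trajectory into a prompt asking for a behavior caption and its reason.

from dataclasses import dataclass
from typing import List


@dataclass
class Waypoint:
    t: float  # time offset from now, in seconds
    x: float  # longitudinal offset in meters (ego frame)
    y: float  # lateral offset in meters (ego frame)


def trajectory_prompt(waypoints: List[Waypoint]) -> str:
    """Turn future planned waypoints into a text prompt, in contrast to
    feeding only a momentary control signal such as current steering/throttle."""
    points = "; ".join(f"t={w.t:.1f}s: ({w.x:.1f}, {w.y:.1f}) m" for w in waypoints)
    return (
        f"Planned ego trajectory over the next few seconds: {points}. "
        "Describe the ego vehicle's future behavior and explain the reason for it."
    )


if __name__ == "__main__":
    plan = [Waypoint(0.5, 4.0, 0.0), Waypoint(1.0, 8.0, 0.5), Waypoint(1.5, 11.0, 1.5)]
    print(trajectory_prompt(plan))
    # In practice this prompt would accompany camera inputs to a multi-modal LLM;
    # how the trajectory is actually encoded in the paper is not specified here.
```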