
Vision-Language Models can Identify Distracted Driver Behavior from Naturalistic Videos

Authors:
Hasan, Md Zahid
Chen, Jiajing
Wang, Jiyang
Rahman, Mohammed Shaiqur
Joshi, Ameya
Velipasalar, Senem
Hegde, Chinmay
Sharma, Anuj
Sarkar, Soumik
Publication Year: 2023

Abstract

Recognizing the activities that cause distraction in real-world driving scenarios is critical to ensuring the safety and reliability of both drivers and pedestrians on the roadways. Conventional computer vision techniques are typically data-intensive and require a large volume of annotated training data to detect and classify various distracted driving behaviors, limiting their efficiency and scalability. We aim to develop a generalized framework that performs robustly with limited or no annotated training data. Recently, vision-language models have offered large-scale visual-textual pretraining that can be adapted to task-specific learning, such as distracted driving activity recognition. Vision-language pretraining models such as CLIP have shown significant promise in learning natural language-guided visual representations. This paper proposes a CLIP-based driver activity recognition approach that identifies driver distraction from naturalistic driving images and videos. CLIP's visual embedding supports both zero-shot transfer and task-specific fine-tuning, enabling the classification of distracted activities from driving video data. Our results show that the zero-shot transfer and video-based CLIP frameworks achieve state-of-the-art performance in predicting the driver's state on two public datasets. We develop both frame-based and video-based frameworks on top of CLIP's visual representation for distracted driving detection and classification and report their results.

Comment: 15 pages, 7 figures
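To make the zero-shot transfer idea in the abstract concrete, below is a minimal sketch of CLIP-based distracted-activity classification on a single frame. It assumes the Hugging Face `transformers` CLIP implementation, the public `openai/clip-vit-base-patch32` checkpoint, and a hypothetical prompt set; the paper's actual prompts, checkpoints, and datasets may differ.

```python
# Minimal sketch: zero-shot distracted-driver classification with CLIP.
# Assumptions: Hugging Face `transformers` CLIP, a public checkpoint, and
# a hypothetical label set -- not necessarily the paper's exact setup.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical natural-language prompts describing driver states.
prompts = [
    "a photo of a driver driving safely",
    "a photo of a driver texting on a phone",
    "a photo of a driver talking on a phone",
    "a photo of a driver drinking",
    "a photo of a driver reaching behind the seat",
]

image = Image.open("driver_frame.jpg")  # one frame from a driving video
inputs = processor(text=prompts, images=image,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits -> probabilities over candidate activities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(prompts[probs.argmax().item()])
```

A video-based variant might, for example, mean-pool per-frame image embeddings over a clip before comparing them with the text embeddings; the paper's video framework may use a more elaborate temporal model.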

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2306.10159
Document Type: Working Paper