Multimodel System for Driver Distraction Detection and Elimination
- Source :
- IEEE Access, Vol. 10, pp. 72458-72469 (2022)
- Publication Year :
- 2022
- Publisher :
- IEEE, 2022.
Abstract
- On average, 3,700 people lose their lives on roads every day in car accidents caused by driver distraction. This research presents a hybrid, deep-learning-based approach that detects the driver's actions and eliminates the driver's distraction as a packaged solution. Detection is performed by analyzing the driver's actions and head pose. Elimination is achieved through voice commands built on trigger words, speech-to-text, and text-classification models that access car functions such as the air conditioning and radio. The driver-action classifier achieved 94.1% accuracy on the AUC driver-distraction benchmark, the state of the art on that database. The command text classifier reached 95.19% accuracy, and the head-pose estimator achieved a 6.21-degree mean absolute error (MAE) in face-angle detection. Training on our car-commands dataset focuses the speech-recognition output on the car-command domain. These algorithms benefit driver safety: the driver can operate car accessories by voice, and the driver's alertness is monitored, with an alarm raised if distraction is detected. This research does not, however, address retinal abnormalities such as sleeping with the eyes open. Real-time tests show a 0.080-second response time for driver-behavior classification and command following when graphical processing units are used.
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 10
- Database :
- Directory of Open Access Journals
- Journal :
- IEEE Access
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.6de53fea1c724fd7aa7952cbd2074592
- Document Type :
- article
- Full Text :
- https://doi.org/10.1109/ACCESS.2022.3188715