Driver Behavior Recognition via Interwoven Deep Convolutional Neural Nets With Multi-Stream Inputs
- Source :
- Zhang, C., Li, R., Kim, W., Yoon, D. & Patras, P. 2020, 'Driver Behavior Recognition via Interwoven Deep Convolutional Neural Nets With Multi-Stream Inputs', IEEE Access, vol. 8, pp. 191138-191151. https://doi.org/10.1109/ACCESS.2020.3032344
- Publication Year :
- 2020
- Publisher :
- IEEE, 2020.
-
Abstract
- Understanding driver activity is vital for in-vehicle systems that aim to reduce the incidence of car accidents rooted in cognitive distraction. Automating real-time behavior recognition while ensuring highly accurate action classification is, however, challenging, given the multitude of circumstances surrounding drivers, the unique traits of individuals, and the computational constraints imposed by in-vehicle embedded platforms. Prior work fails to jointly meet these runtime/accuracy requirements and mostly relies on a single sensing modality, which in turn can be a single point of failure. In this paper, we harness the exceptional feature extraction abilities of deep learning and propose a dedicated Interwoven Deep Convolutional Neural Network (InterCNN) architecture to tackle the problem of accurate classification of driver behaviors in real time. The proposed solution exploits information from multi-stream inputs, i.e., in-vehicle cameras with different fields of view and optical flows computed from the recorded images, and merges the abstract features it extracts through multiple fusion layers. This builds a tight ensembling system, which significantly improves the robustness of the model. In addition, we introduce a temporal voting scheme based on historical inference instances to enhance the classification accuracy. Experiments conducted with a dataset collected in a mock-up car environment demonstrate that the proposed InterCNN with MobileNet convolutional blocks can classify 9 different behaviors with 73.97% accuracy, and 5 'aggregated' behaviors with 81.66% accuracy. We further show that our architecture is highly computationally efficient, as it performs inferences within 15 ms, which satisfies the real-time constraints of intelligent cars. Moreover, our InterCNN is robust to lossy input, as the classification remains accurate when two input streams are occluded.
  Comment: 13 pages, 15 figures
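- The abstract describes two main ideas: extracting features from several input streams (camera views and optical flows) that are merged through fusion layers, and smoothing predictions with a temporal vote over recent inferences. The sketch below is a minimal illustration of those ideas only, not the authors' InterCNN: the stream channel counts, encoder sizes, late fusion by concatenation, and the 15-frame voting window are all assumptions made for illustration, whereas the paper's model uses MobileNet convolutional blocks and multiple interwoven fusion layers.

```python
# Illustrative sketch only: a minimal multi-stream CNN with feature fusion and
# temporal voting. Stream layout, layer sizes and the voting window are
# assumptions, not the authors' InterCNN implementation.
from collections import Counter, deque

import torch
import torch.nn as nn


class StreamEncoder(nn.Module):
    """Small convolutional encoder applied independently to one input stream."""

    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, feat_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # -> (B, feat_dim, 1, 1)
            nn.Flatten(),             # -> (B, feat_dim)
        )

    def forward(self, x):
        return self.net(x)


class MultiStreamNet(nn.Module):
    """Encodes each stream separately, fuses the features, then classifies."""

    def __init__(self, stream_channels=(3, 3, 2, 2), num_classes: int = 9):
        super().__init__()
        self.encoders = nn.ModuleList(StreamEncoder(c) for c in stream_channels)
        self.classifier = nn.Linear(64 * len(stream_channels), num_classes)

    def forward(self, streams):
        feats = [enc(x) for enc, x in zip(self.encoders, streams)]
        return self.classifier(torch.cat(feats, dim=1))  # late fusion by concatenation


def temporal_vote(history: deque, new_pred: int) -> int:
    """Append the newest prediction and return the majority class in the window."""
    history.append(new_pred)
    return Counter(history).most_common(1)[0][0]


if __name__ == "__main__":
    model = MultiStreamNet()
    history = deque(maxlen=15)  # assumed voting window
    # Two RGB camera views plus two 2-channel optical-flow maps (dummy data).
    streams = [torch.randn(1, c, 112, 112) for c in (3, 3, 2, 2)]
    logits = model(streams)
    pred = temporal_vote(history, int(logits.argmax(dim=1)))
    print("smoothed prediction:", pred)
```

  Majority voting over a sliding window of recent inferences is one simple way to suppress isolated per-frame misclassifications; the paper's temporal voting scheme is motivated by the same intuition.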
- Subjects :
- FOS: Computer and information sciences
  General Computer Science
  Computer Vision and Pattern Recognition (cs.CV)
  Feature extraction
  Inference
  Machine learning
  Convolutional neural network
  Robustness (computer science)
  General Materials Science
  Modality (human–computer interaction)
  Artificial neural network
  Deep learning
  General Engineering
  Driver behavior recognition
  Artificial intelligence
  lcsh:Electrical engineering. Electronics. Nuclear engineering
  lcsh:TK1-9971
Details
- Language :
- English
- ISSN :
- 2169-3536
- Volume :
- 8
- Database :
- OpenAIRE
- Journal :
- IEEE Access
- Accession number :
- edsair.doi.dedup.....a2f1bca475dffdb1e96b0787baadc926
- Full Text :
- https://doi.org/10.1109/ACCESS.2020.3032344