1. Driving Cognitive Alertness Detecting Using Evoked Multimodal Physiological Signals Based on Uncertain Self-Supervised Learning
- Authors
Pengbo Zhao, Cheng Lian, Bingrong Xu, Yixin Su, and Zhigang Zeng
- Subjects
Multimodal physiological signals, self-supervised learning, multiscale entropy, multimodal uncertainty-aware, multimodal cascaded attention, Medical technology, R855-855.5, Therapeutics. Pharmacology, RM1-950
- Abstract
Multimodal physiological signals play a pivotal role in drivers’ perception of work stress. However, the scarcity of labels and the multitude of modalities make it challenging to use physiological signals for driving cognitive alertness detection. We thus propose a multimodal physiological signal detection model based on self-supervised learning. First, to mine the intrinsic information in the data and highlight its most informative components, we introduce a multiscale entropy (MSE) evoked attention mechanism. Second, the multimodal patches are processed by a novel cascaded attention mechanism. This mechanism is rooted in patch-level interactions within each modality and progressively integrates and interacts with other modalities in a cascading manner, thereby mitigating computational complexity. Moreover, a multimodal uncertainty-aware module is devised to cope effectively with intricate variations in the data; it improves generalization by incorporating uncertain resampling. Experiments were conducted on the DriveDB dataset and the CogPilot dataset with both the linear probing and the fine-tuning evaluation protocols. Experimental results in the subject-dependent setting show that our model significantly outperforms previous competitive baselines. In the linear probing evaluation, our model achieves average improvements of 6.26% in Accuracy (Acc), 6.64% in Recall (Rec), and 7.75% in F1 Score. It also outperforms other models by 7.96% in Acc, 9.13% in Rec, and 9.2% in F1 Score under the fine-tuning evaluation. Furthermore, our model demonstrates robust performance in the subject-independent setting.
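The abstract's MSE-evoked attention builds on multiscale entropy, a standard signal-complexity measure: the series is coarse-grained at several scales and the sample entropy of each coarse-grained series is computed. The sketch below is a minimal, generic implementation of that standard MSE procedure, not the paper's attention mechanism; the parameters `m` (embedding dimension), `r` (tolerance as a fraction of the standard deviation), and `max_scale` are conventional defaults assumed here, and re-estimating the tolerance at every scale is one common convention among several.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series x; tolerance is r times the series std."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templ = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev (max-coordinate) distance between every template pair.
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # Count matching pairs, excluding self-matches on the diagonal.
        return (d <= tol).sum() - len(templ)

    b = count_matches(m)       # matches of length m
    a = count_matches(m + 1)   # matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, max_scale=5, m=2, r=0.2):
    """Coarse-grain x at scales 1..max_scale and return sample entropy per scale."""
    x = np.asarray(x, dtype=float)
    entropies = []
    for tau in range(1, max_scale + 1):
        n = len(x) // tau
        # Non-overlapping means of tau consecutive samples (Costa-style coarse-graining).
        coarse = x[:n * tau].reshape(n, tau).mean(axis=1)
        entropies.append(sample_entropy(coarse, m, r))
    return entropies
```

Applied per channel, such a scale-wise entropy profile yields one complexity score per scale, which is the kind of intrinsic-information summary an attention mechanism could weight.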
- Published
- 2024