Multimodal signal processing and interaction for a driving simulator: Component-based architecture.
- Source :
- Journal on Multimodal User Interfaces; Mar 2007, Vol. 1, Issue 1, p49-58, 10p
- Publication Year :
- 2007
Abstract
- In this paper we focus on the software design of a multimodal driving simulator based on both multimodal detection of the driver’s focus of attention and detection and prediction of the driver’s fatigue state. Capturing and interpreting the driver’s focus of attention and fatigue state is based on video data (e.g., facial expression, head movement, eye tracking). While the input multimodal interface relies on passive modalities only (also called an attentive user interface), the output multimodal user interface includes several active output modalities for presenting alert messages, including graphics and text on a mini-screen and on the windshield, sounds, speech, and vibration (a vibrating wheel). Active input modalities are added in the meta-user interface to let the user dynamically select the output modalities. The driving simulator serves as a case study of a software architecture built from multimodal signal processing and multimodal interaction components, considering two software platforms, OpenInterface and ICARE. [ABSTRACT FROM AUTHOR]
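- To make the architecture described in the abstract concrete, below is a minimal sketch of a component-based pipeline of this kind: passive input components feed attention and fatigue estimates into a decision component, and a meta-UI lets the user enable or disable the active output modalities that render alerts. This is an illustrative assumption, not the authors' OpenInterface/ICARE implementation; all class names, thresholds, and modalities here are hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's code) of a component-based
# multimodal output pipeline with user-selectable output modalities.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Alert:
    message: str
    severity: str  # e.g. "warning" or "critical"


class OutputModality:
    """Base class for active output components (screen, sound, vibration...)."""
    name = "abstract"

    def render(self, alert: Alert) -> None:
        raise NotImplementedError


class ScreenText(OutputModality):
    name = "screen"

    def render(self, alert: Alert) -> None:
        print(f"[mini-screen] {alert.severity.upper()}: {alert.message}")


class WheelVibration(OutputModality):
    name = "vibration"

    def render(self, alert: Alert) -> None:
        print(f"[wheel] vibrating ({alert.severity})")


class MetaUI:
    """Lets the user dynamically enable or disable output modalities."""

    def __init__(self, modalities: List[OutputModality]):
        self.modalities = {m.name: m for m in modalities}
        self.enabled = set(self.modalities)

    def toggle(self, name: str, on: bool) -> None:
        (self.enabled.add if on else self.enabled.discard)(name)

    def dispatch(self, alert: Alert) -> None:
        for name in self.enabled:
            self.modalities[name].render(alert)


class AlertDecision:
    """Fuses passive-input estimates (attention, fatigue) into alert events."""

    def __init__(self, sink: Callable[[Alert], None]):
        self.sink = sink

    def update(self, attention: float, fatigue: float) -> None:
        # Hypothetical thresholds; the paper derives its estimates from video.
        if fatigue > 0.8:
            self.sink(Alert("High fatigue detected - please take a break", "critical"))
        elif attention < 0.3:
            self.sink(Alert("Attention off the road", "warning"))


if __name__ == "__main__":
    ui = MetaUI([ScreenText(), WheelVibration()])
    decision = AlertDecision(ui.dispatch)
    ui.toggle("vibration", False)                # driver disables the vibrating wheel
    decision.update(attention=0.2, fatigue=0.5)  # -> screen warning only
    decision.update(attention=0.9, fatigue=0.9)  # -> critical fatigue alert
```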
Details
- Language :
- English
- ISSN :
- 1783-7677
- Volume :
- 1
- Issue :
- 1
- Database :
- Complementary Index
- Journal :
- Journal on Multimodal User Interfaces
- Publication Type :
- Academic Journal
- Accession number :
- 49469868
- Full Text :
- https://doi.org/10.1007/BF02884432