1. EAV: EEG-Audio-Video Dataset for Emotion Recognition in Conversational Contexts.
- Author
Lee, Min-Ho, Shomanov, Adai, Begim, Balgyn, Kabidenova, Zhuldyz, Nyssanbay, Aruna, Yazici, Adnan, and Lee, Seong-Whan
- Subjects
- Artificial neural networks, emotion recognition, emotional state, human behavior, calmness
- Abstract
Understanding emotional states is pivotal for the development of next-generation human-machine interfaces. Human behavior in social interactions gives rise to psycho-physiological processes shaped by perceptual inputs; efforts to understand brain function and human behavior could therefore catalyze the development of AI models with human-like attributes. In this study, we introduce a multimodal emotion dataset comprising 30-channel electroencephalography (EEG), audio, and video recordings from 42 participants. Each participant engaged in a cue-based conversation scenario eliciting five distinct emotions: neutral, anger, happiness, sadness, and calmness. Each participant contributed 200 interactions, encompassing both listening and speaking, for a cumulative total of 8,400 interactions across all participants. We evaluated baseline emotion-recognition performance for each modality using established deep neural network (DNN) methods. The Emotion in EEG-Audio-Visual (EAV) dataset is the first public dataset to incorporate these three primary modalities for emotion recognition in a conversational context. We anticipate that it will contribute significantly to the modeling of the human emotional process, from both fundamental neuroscience and machine learning viewpoints.
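The dataset layout described in the abstract (42 participants, 200 cue-based interactions each, five emotion classes, three modalities) can be sketched as a simple data model. This is an illustrative sketch only; the class and field names below are hypothetical and do not reflect the EAV dataset's actual file format or API:

```python
from dataclasses import dataclass

# Values taken from the abstract
EMOTIONS = ("neutral", "anger", "happiness", "sadness", "calmness")
MODALITIES = ("eeg", "audio", "video")  # 30-channel EEG plus A/V recordings
N_PARTICIPANTS = 42
N_INTERACTIONS_PER_PARTICIPANT = 200  # both listening and speaking turns

@dataclass
class Interaction:
    """One cue-based conversational turn (hypothetical record structure)."""
    participant_id: int  # 0..41
    trial_id: int        # 0..199 within a participant
    emotion: str         # one of EMOTIONS
    role: str            # "listening" or "speaking"

# Total interaction count stated in the abstract:
total = N_PARTICIPANTS * N_INTERACTIONS_PER_PARTICIPANT
print(total)  # 8400
```

The arithmetic confirms the abstract's cumulative figure: 42 participants × 200 interactions = 8,400 interactions.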
- Published
- 2024