Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning
- Source:
- IEEE Access, Vol. 11, pp. 56634-56648 (2023)
- Publication Year:
- 2023
- Publisher:
- IEEE, 2023.
Abstract
- Even without hearing or seeing individuals, humans can infer subtle emotions from a range of cues and surroundings. However, existing research on emotion recognition mostly focuses on recognizing speakers’ emotions when all modalities are available. In real-world situations, emotion reasoning, which infers a person’s emotions from their surroundings when neither the face nor the voice can be observed, is an interesting field. Therefore, in this paper, we propose a novel attention-based multimodal approach for predicting emotion when one or more modalities are missing. Specifically, we apply self-attention to each unimodal representation to extract its dominant features, and we use compounded paired-modality attention (CPMA) among sets of modalities to identify the context of the considered individual, i.e., the interplay of modalities, and to capture people’s interactions in the video. The proposed model is trained on the Multimodal Emotion Reasoning (MEmoR) dataset, which includes multimedia inputs such as visual, audio, text, and personality features. The model achieves a weighted F1-score of 50.63% on the primary emotion group and 42.7% on the fine-grained one. These results show that our proposed model outperforms conventional approaches to emotion reasoning.
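The fusion scheme the abstract describes, per-modality self-attention followed by attention over pairs of modalities, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names (`attention`, `fuse_modalities`), the mean-pooling of pair contexts, and the concatenation into a single feature vector are all assumptions made for clarity. Because fusion iterates only over the modalities actually present, a missing modality simply drops its pooled vector and its pair terms.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (seq_q, d) x (seq_k, d) -> (seq_q, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def fuse_modalities(feats):
    """feats: dict modality name -> (seq_len, d) array.
    Missing modalities are simply omitted from the dict."""
    # self-attention within each available modality
    refined = {m: attention(x, x, x) for m, x in feats.items()}
    # paired-modality attention: each modality attends to each other one
    # (a hypothetical stand-in for the paper's CPMA module)
    names = sorted(refined)
    pair_ctx = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            pair_ctx.append(attention(refined[a], refined[b], refined[b]).mean(axis=0))
            pair_ctx.append(attention(refined[b], refined[a], refined[a]).mean(axis=0))
    # pool each refined modality and concatenate everything into one vector
    pooled = [refined[m].mean(axis=0) for m in names]
    return np.concatenate(pooled + pair_ctx)
```

With three modalities of dimension d, the fused vector has 3 pooled parts plus 2 directed contexts for each of the 3 pairs, i.e. 9·d entries; with one modality missing it shrinks to 4·d, so downstream layers would need to handle the variable size (e.g. via masking or padding).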
Details
- Language:
- English
- ISSN:
- 2169-3536
- Volume:
- 11
- Database:
- Directory of Open Access Journals
- Journal:
- IEEE Access
- Publication Type:
- Academic Journal
- Accession number:
- edsdoj.317354bb8d244e2b9676d3e034a4ae3e
- Document Type:
- Article
- Full Text:
- https://doi.org/10.1109/ACCESS.2023.3283597