
Exploring the Contextual Factors Affecting Multimodal Emotion Recognition in Videos

Authors:
Raj Kumar Gupta
Yinping Yang
Prasanta Bhattacharya
Source:
IEEE Transactions on Affective Computing. 14:1547-1557
Publication Year:
2023
Publisher:
Institute of Electrical and Electronics Engineers (IEEE), 2023.

Abstract

Emotional expressions form a key part of user behavior on today's digital platforms. While multimodal emotion recognition techniques are gaining research attention, there is a lack of deeper understanding of how visual and non-visual features can be used to better recognize emotions in certain contexts, but not others. This study analyzes the interplay between the effects of multimodal emotion features derived from facial expressions, tone, and text in conjunction with two key contextual factors: i) the gender of the speaker, and ii) the duration of the emotional episode. Using a large public dataset of 2,176 manually annotated YouTube videos, we found that while multimodal features consistently outperformed bimodal and unimodal features, their performance varied significantly across different emotions, genders, and duration contexts. Multimodal features performed notably better for male speakers in recognizing most emotions. Furthermore, multimodal features performed notably better for shorter than for longer videos in recognizing neutral and happy states, but not sad and angry ones. These findings offer new insights towards the development of more context-aware emotion recognition and empathetic systems.

Comment: Accepted version at IEEE Transactions on Affective Computing
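To make the modality-comparison design concrete, the following is a minimal sketch (not the authors' code or dataset) of how one might compare unimodal, bimodal, and trimodal early fusion and score each combination within a contextual subgroup such as speaker gender. The feature arrays, their dimensions, the label scheme, and the logistic-regression classifier are all illustrative assumptions.

```python
# A minimal sketch: compares unimodal, bimodal, and multimodal (early) fusion
# for emotion classification, then evaluates each fusion overall and within a
# contextual subgroup (here, speaker gender). All features below are random
# placeholders standing in for real facial, tonal, and textual features.
from itertools import combinations

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2176  # number of annotated videos in the dataset described above

# Hypothetical per-video feature blocks, one per modality.
modalities = {
    "face": rng.normal(size=(n, 32)),   # facial-expression features
    "tone": rng.normal(size=(n, 16)),   # acoustic/tonal features
    "text": rng.normal(size=(n, 64)),   # transcript-based features
}
labels = rng.integers(0, 4, size=n)     # e.g., neutral/happy/sad/angry
gender = rng.integers(0, 2, size=n)     # contextual factor: 0=female, 1=male

def evaluate(feature_names):
    """Early fusion: concatenate the chosen modality blocks, then classify."""
    X = np.hstack([modalities[m] for m in feature_names])
    X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
        X, labels, gender, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    # Report macro-F1 overall and within each gender subgroup.
    scores = {"all": f1_score(y_te, pred, average="macro")}
    for g in (0, 1):
        mask = g_te == g
        scores[f"gender={g}"] = f1_score(y_te[mask], pred[mask],
                                         average="macro")
    return scores

# Sweep every unimodal, bimodal, and trimodal combination of feature blocks.
for k in (1, 2, 3):
    for combo in combinations(modalities, k):
        print("+".join(combo), evaluate(combo))
```

With real features, comparing the per-subgroup scores across the seven modality combinations is what would reveal the kind of context-dependent multimodal advantage the abstract reports; a duration-based subgroup analysis would follow the same pattern with a video-length variable in place of gender.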

Details

ISSN:
2371-9850
Volume:
14
Database:
OpenAIRE
Journal:
IEEE Transactions on Affective Computing
Accession number:
edsair.doi.dedup.....4eb604a0a0070a0be04a12de56a29fd2