
Facial Emotion Recognition with Inter-Modality-Attention-Transformer-Based Self-Supervised Learning.

Authors :
Chaudhari, Aayushi
Bhatt, Chintan
Krishna, Achyut
Travieso-González, Carlos M.
Source :
Electronics (2079-9292); Jan2023, Vol. 12 Issue 2, p288, 15p
Publication Year :
2023

Abstract

Emotion recognition is a challenging research field because individuals express cognitive–emotional cues in a wide variety of ways, including language, facial expressions, and speech. Using video as the input provides a wealth of data for analyzing human emotions. In this research, we combine text, audio (speech), and visual modalities using features derived from separately pretrained self-supervised learning models. The fusion of features and representations is the biggest challenge in multimodal emotion classification research. Because self-supervised learning features are high-dimensional, we present a transformer- and attention-based fusion method for incorporating multimodal self-supervised learning features, which achieved an accuracy of 86.40% for multimodal emotion classification. [ABSTRACT FROM AUTHOR]
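The abstract describes fusing self-supervised features from three modalities with cross-modal attention. The paper's exact architecture is not given here, so the following is only a minimal NumPy sketch of the general idea: text features attend over audio and visual features, and the attended summaries are concatenated with the text features to form a fused representation. All dimensions and names (`cross_modal_attention`, the 64-dim feature size, the token/frame counts) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query_feats, key_feats):
    # query tokens from one modality attend over another modality's tokens
    d_k = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(d_k)   # (n_q, n_k)
    weights = softmax(scores, axis=-1)                  # rows sum to 1
    return weights @ key_feats                          # (n_q, d_k)

# hypothetical pre-extracted self-supervised features (random stand-ins)
rng = np.random.default_rng(0)
text = rng.standard_normal((4, 64))    # 4 text tokens, 64-dim features
audio = rng.standard_normal((6, 64))   # 6 audio frames
video = rng.standard_normal((5, 64))   # 5 video frames

# text attends over audio and video, then all three views are concatenated
text_audio = cross_modal_attention(text, audio)
text_video = cross_modal_attention(text, video)
fused = np.concatenate([text, text_audio, text_video], axis=-1)  # (4, 192)
```

In a full model this fused representation would feed a classification head; the paper's method additionally uses learned projections and transformer layers, which this sketch omits for brevity.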

Details

Language :
English
ISSN :
20799292
Volume :
12
Issue :
2
Database :
Complementary Index
Journal :
Electronics (2079-9292)
Publication Type :
Academic Journal
Accession Number :
161437589
Full Text :
https://doi.org/10.3390/electronics12020288