
Double Multi-Head Attention Multimodal System for Odyssey 2024 Speech Emotion Recognition Challenge

Authors:
Costa, Federico
India, Miquel
Hernando, Javier
Source:
Proc. The Speaker and Language Recognition Workshop (Odyssey 2024), 266-273
Publication Year:
2024

Abstract

As computer-based applications become more integrated into our daily lives, the importance of Speech Emotion Recognition (SER) has increased significantly. To promote research on innovative approaches to SER, the Odyssey 2024 Speech Emotion Recognition Challenge was organized as part of the Odyssey 2024 Speaker and Language Recognition Workshop. In this paper, we describe the Double Multi-Head Attention Multimodal System developed for this challenge. Pre-trained self-supervised models were used to extract informative acoustic and text features. An early fusion strategy was adopted, where a Multi-Head Attention layer transforms these mixed features into complementary contextualized representations. A second attention mechanism is then applied to pool these representations into an utterance-level vector. Our proposed system achieved third place among the 31 participating teams in the categorical task ranking, with a Macro-F1 score of 34.41%.

Comment: Odyssey 2024: The Speaker and Language Recognition Workshop
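
To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the double attention idea: a Multi-Head Attention layer contextualizes the concatenated (early-fused) acoustic and text feature sequences, and a second attention mechanism pools the result into a single utterance-level vector for emotion classification. The feature dimensions, number of heads, class count, and all module names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; dimensions and class count are placeholders.
import torch
import torch.nn as nn


class DoubleMHASketch(nn.Module):
    def __init__(self, acoustic_dim=1024, text_dim=768, model_dim=256,
                 num_heads=4, num_classes=8):
        super().__init__()
        # Project each modality into a shared space before early fusion.
        self.acoustic_proj = nn.Linear(acoustic_dim, model_dim)
        self.text_proj = nn.Linear(text_dim, model_dim)
        # First attention: contextualize the concatenated (mixed) sequence.
        self.mha = nn.MultiheadAttention(model_dim, num_heads, batch_first=True)
        # Second attention: pool frame/token vectors into one utterance-level
        # vector via learned scalar attention weights.
        self.pool_score = nn.Linear(model_dim, 1)
        self.classifier = nn.Linear(model_dim, num_classes)

    def forward(self, acoustic_feats, text_feats):
        # acoustic_feats: (batch, T_a, acoustic_dim), e.g. from a speech SSL model
        # text_feats:     (batch, T_t, text_dim),     e.g. from a text SSL model
        fused = torch.cat([self.acoustic_proj(acoustic_feats),
                           self.text_proj(text_feats)], dim=1)
        context, _ = self.mha(fused, fused, fused)           # first attention
        weights = torch.softmax(self.pool_score(context), dim=1)
        utterance = (weights * context).sum(dim=1)            # attention pooling
        return self.classifier(utterance)


if __name__ == "__main__":
    model = DoubleMHASketch()
    logits = model(torch.randn(2, 120, 1024), torch.randn(2, 30, 768))
    print(logits.shape)  # torch.Size([2, 8])
```

In this reading, the first attention layer lets acoustic frames attend to text tokens (and vice versa) within one fused sequence, while the second, lightweight attention replaces mean pooling with learned weighting of the contextualized vectors.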

Details

Database:
arXiv
Journal:
Proc. The Speaker and Language Recognition Workshop (Odyssey 2024), 266-273
Publication Type:
Report
Accession number:
edsarx.2406.10598
Document Type:
Working Paper
Full Text:
https://doi.org/10.21437/odyssey.2024-38