
Multi-Classifier Interactive Learning for Ambiguous Speech Emotion Recognition

Authors :
Zhou, Ying
Liang, Xuefeng
Gu, Yu
Yin, Yifei
Yao, Longshan
Publication Year :
2020

Abstract

In recent years, speech emotion recognition has become significant in industrial applications such as call centers, social robots, and health care. Combining speech recognition with speech emotion recognition can improve both feedback efficiency and quality of service; thus, speech emotion recognition has attracted much attention in industry and academia. Since the emotions present in an utterance may occur with varied probabilities, speech emotion is often ambiguous, which poses great challenges to recognition tasks. However, previous studies commonly assigned a definite single label or multi-label to each utterance, so their algorithms suffer low accuracy because of this inappropriate representation. Inspired by the optimally interacting theory, we address ambiguous speech emotions by proposing a novel multi-classifier interactive learning (MCIL) method. In MCIL, multiple different classifiers first mimic several individuals who have inconsistent cognitions of ambiguous emotions, and construct new ambiguous labels (emotion probability distributions). They are then retrained with the new labels so that their cognitions interact. This procedure enables each classifier to learn better representations of ambiguous data from the others, further improving recognition ability. Experiments on three benchmark corpora (MAS, IEMOCAP, and FAU-AIBO) demonstrate that MCIL not only improves each classifier's performance but also raises their recognition consistency from moderate to substantial.

Comment: 10 pages, 4 figures
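The label-construction step the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each classifier outputs a per-emotion probability distribution, and the simple averaging rule used to fuse the classifiers' inconsistent "cognitions" into one soft label is an assumption for illustration (the actual fusion in MCIL may differ, e.g. weighting classifiers).

```python
import numpy as np

def build_ambiguous_labels(prob_preds):
    """Fuse per-classifier emotion probability distributions into soft labels.

    prob_preds: list of (n_samples, n_emotions) arrays, one per classifier.
    Returns an (n_samples, n_emotions) array of ambiguous labels
    (emotion probability distributions) for retraining.
    """
    stacked = np.stack(prob_preds)      # (n_classifiers, n_samples, n_emotions)
    soft = stacked.mean(axis=0)         # simple consensus: average the cognitions
    # Renormalize so each row is a valid probability distribution.
    return soft / soft.sum(axis=1, keepdims=True)

# Hypothetical example: three classifiers disagree on one ambiguous
# utterance over the emotions (happy, sad, neutral).
preds = [
    np.array([[0.7, 0.2, 0.1]]),
    np.array([[0.2, 0.6, 0.2]]),
    np.array([[0.4, 0.3, 0.3]]),
]
soft_labels = build_ambiguous_labels(preds)
print(soft_labels)
```

The resulting distribution would then serve as the training target in the retraining round, replacing the original hard label, so that each classifier is exposed to the other classifiers' views of the ambiguous utterance.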

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2012.05429
Document Type :
Working Paper
Full Text :
https://doi.org/10.1109/TASLP.2022.3145287