Speech Emotion Recognition Based on Multi-task Deep Feature Extraction and MKPCA Feature Fusion
- Authors
Baoyun LI, Xueying ZHANG, Juan LI, Lixia HUANG, Guijun CHEN, and Ying SUN
- Subjects
speech emotion recognition, multi-task learning, acoustic depth features, spectrogram features, multi-kernel principal component analysis
- Abstract
Purposes Speech emotion recognition enables computers to understand the emotional information contained in human speech and is an important part of intelligent human-computer interaction. Feature extraction and fusion are key stages of a speech emotion recognition system and strongly influence recognition results. To address the problem that traditional acoustic features carry insufficient emotional information, this paper proposes a deep feature extraction method based on multi-task learning that optimizes the acoustic features. Methods The resulting acoustic depth features characterize the speech signal better and carry more emotional information than the original acoustic features. Exploiting the complementarity between acoustic features and spectrogram features, spectrogram features are then extracted with a convolutional neural network. Finally, multi-kernel principal component analysis (MKPCA) is used to fuse and reduce the dimensionality of the two feature sets, and the resulting fusion features effectively improve recognition performance. Findings Experiments are carried out on the EMODB and CASIA speech databases. With a DNN classifier, the multi-kernel fusion of the acoustic depth features and the spectrogram features achieves the highest recognition rates of 92.71% and 88.25%, respectively. Compared with direct feature concatenation, this method improves the recognition rate by 2.43% and 2.83%, respectively.
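The MKPCA fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the kernel types (RBF plus linear), the kernel weight, the feature dimensions, and the random placeholder features are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(0)
# Placeholder stand-ins for the two feature streams per utterance
# (in the paper these come from the multi-task network and the CNN):
acoustic_feats = rng.normal(size=(100, 64))    # acoustic depth features
spectrogram_feats = rng.normal(size=(100, 128))  # CNN spectrogram features

# Stack the two feature sets and build a weighted multi-kernel matrix.
X = np.hstack([acoustic_feats, spectrogram_feats])
w = 0.5  # kernel mixing weight (illustrative hyperparameter)
K = w * rbf_kernel(X, gamma=0.01) + (1 - w) * linear_kernel(X)

# Kernel PCA on the precomputed combined kernel performs
# feature fusion and dimension reduction in one step.
kpca = KernelPCA(n_components=32, kernel="precomputed")
fused = kpca.fit_transform(K)
print(fused.shape)  # (100, 32)
```

The fused 32-dimensional representation would then be fed to a classifier such as a DNN; the component count and kernel weight are hyperparameters to tune on a validation set.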
- Published
- 2023