Feature fusion: research on emotion recognition in English speech.
- Source:
- International Journal of Speech Technology; Jun 2024, Vol. 27, Issue 2, p319-327, 9p
- Publication Year:
- 2024
Abstract
- English speech incorporates numerous features associated with the speaker's emotions, offering valuable cues for emotion recognition. This paper begins by briefly outlining preprocessing approaches for English speech signals. The Mel-frequency cepstral coefficients (MFCC), energy, and short-time zero-crossing rate were then chosen as features, and their statistical properties were computed. The resulting 250-dimensional fused feature vector was employed as input. A novel approach that combined a gated recurrent unit (GRU) with a convolutional neural network (CNN) was designed for emotion recognition. The bidirectional GRU (BiGRU) method was enhanced with skip connections to create a CNN-Skip-BiGRU model for English speech emotion recognition. Experimental evaluations were conducted on the IEMOCAP dataset. The findings indicated that the fused features performed best, achieving an unweighted accuracy of 70.31% and a weighted accuracy of 70.88%. Compared with models such as CNN-long short-term memory (CNN-LSTM), the CNN-Skip-BiGRU model discriminated better between different emotions, and it also compared favorably with several existing emotion recognition methods. These results underscore the efficacy of the improved method for English speech emotion recognition and suggest its practical applicability. [ABSTRACT FROM AUTHOR]
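The fusion strategy described in the abstract (frame-level features summarized by statistics, then concatenated into one vector) can be sketched as follows. This is a minimal illustration, not the authors' code: it computes only short-time energy and zero-crossing rate with NumPy (MFCC extraction would need a DSP library such as librosa), and the frame length, hop size, and choice of statistics are assumptions, so the toy vector here is far smaller than the paper's 250 dimensions.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    # Split a 1-D signal into overlapping frames
    # (25 ms frames with a 10 ms hop at 16 kHz -- assumed values).
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def short_time_energy(frames):
    # Sum of squared samples per frame.
    return (frames ** 2).sum(axis=1)

def zero_crossing_rate(frames):
    # Fraction of adjacent-sample sign changes per frame.
    signs = np.sign(frames)
    signs[signs == 0] = 1
    return (np.abs(np.diff(signs, axis=1)) > 0).mean(axis=1)

def fuse_statistics(*feature_tracks):
    # Summarize each frame-level track with mean/std/max/min,
    # then concatenate everything into a single fused vector.
    stats = []
    for track in feature_tracks:
        stats.extend([track.mean(), track.std(), track.max(), track.min()])
    return np.array(stats)

rng = np.random.default_rng(0)
x = rng.standard_normal(16000)  # 1 s of dummy audio at 16 kHz
frames = frame_signal(x)
fused = fuse_statistics(short_time_energy(frames), zero_crossing_rate(frames))
print(fused.shape)  # 2 tracks x 4 statistics = (8,)
```

In the paper's setup, MFCC tracks and their statistics would join the concatenation as well, which is what brings the fused vector up to 250 dimensions before it is fed to the CNN-Skip-BiGRU model.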
Details
- Language:
- English
- ISSN:
- 1381-2416
- Volume:
- 27
- Issue:
- 2
- Database:
- Complementary Index
- Journal:
- International Journal of Speech Technology
- Publication Type:
- Academic Journal
- Accession number:
- 178560946
- Full Text:
- https://doi.org/10.1007/s10772-024-10107-7