A Novel Long Short-Term Memory Network Model For Multimodal Music Emotion Analysis In Affective Computing
- Source :
- Journal of Applied Science and Engineering, Vol 26, Iss 3, Pp 367-376 (2022)
- Publication Year :
- 2022
- Publisher :
- Tamkang University Press, 2022.
Abstract
- Emotion recognition from audio/video media in affective computing has important application value for deep cognition in human-computer interaction (HCI), brain-computer interface (BCI), and other fields. In modern distance education in particular, music emotion analysis can serve as an important technique for real-time evaluation of the teaching process. In complex dance scenes, however, the accuracy of traditional music emotion analysis methods is low. Therefore, this paper proposes a novel long short-term memory (LSTM) network model for multimodal music emotion analysis in affective computing. A dual-channel LSTM is used to simulate the human auditory and visual processing pathways, processing the emotional information of music and facial expressions respectively. The model is then trained and tested on an open bi-modal music dataset. On top of the LSTM model, the analytic hierarchy process (AHP) is introduced to perform weighted feature fusion at the decision level. Finally, experiments show that the proposed method effectively improves the recognition rate and substantially reduces training time.
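The abstract describes a dual-channel LSTM architecture with AHP-weighted fusion at the decision level. The sketch below is an illustrative reconstruction of that idea, not the authors' implementation: the feature dimensions, number of emotion classes, and fusion weights are all assumptions, and the AHP-derived weights are shown only as placeholder constants.

```python
# Hypothetical sketch: dual-channel LSTM with weighted decision-level fusion.
import torch
import torch.nn as nn

class ChannelLSTM(nn.Module):
    """One modality branch: an LSTM encoder followed by an emotion classifier."""
    def __init__(self, input_dim, hidden_dim, num_classes):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)        # h_n: (num_layers, batch, hidden_dim)
        return self.classifier(h_n[-1])   # per-class scores for this modality

class DualChannelEmotionModel(nn.Module):
    """Two parallel LSTM channels (music audio, facial expression) fused at the decision level."""
    def __init__(self, audio_dim=40, face_dim=68, hidden_dim=128, num_classes=4,
                 fusion_weights=(0.6, 0.4)):
        super().__init__()
        self.audio_branch = ChannelLSTM(audio_dim, hidden_dim, num_classes)
        self.face_branch = ChannelLSTM(face_dim, hidden_dim, num_classes)
        # In the paper the fusion weights come from an AHP pairwise-comparison
        # matrix; the values used here are placeholders for illustration.
        self.register_buffer("w", torch.tensor(fusion_weights))

    def forward(self, audio_seq, face_seq):
        p_audio = torch.softmax(self.audio_branch(audio_seq), dim=-1)
        p_face = torch.softmax(self.face_branch(face_seq), dim=-1)
        # Weighted decision-level fusion of the two channels' class probabilities.
        return self.w[0] * p_audio + self.w[1] * p_face

# Example usage with random data of assumed shapes.
model = DualChannelEmotionModel()
audio = torch.randn(8, 100, 40)   # (batch, time steps, audio features)
face = torch.randn(8, 100, 68)    # (batch, time steps, facial features)
probs = model(audio, face)        # (8, 4) fused emotion probabilities
```

Fusing at the decision level, as sketched here, lets each channel be trained and evaluated independently before the AHP weights combine their outputs, which is consistent with the abstract's claim of reduced training time.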
Details
- Language :
- English
- ISSN :
- 2708-9967 and 2708-9975
- Volume :
- 26
- Issue :
- 3
- Database :
- Directory of Open Access Journals
- Journal :
- Journal of Applied Science and Engineering
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.5eefcb8c1d594fc6bae70bd3c2afd1f0
- Document Type :
- article
- Full Text :
- https://doi.org/10.6180/jase.202303_26(3).0008