Multimodal Information Fusion and Data Generation for Evaluation of Second Language Emotional Expression
- Authors
Jun Yang, Liyan Wang, Yong Qi, Haifeng Chen, and Jian Li
- Subjects
second language learning, data generation, multimodal emotion recognition, multimodal emotion evaluation, emotion features
- Abstract
This study develops an emotion evaluation method for second language learners that uses multimodal information to comprehensively evaluate students' emotional expressions. Existing evaluation methods focus primarily on the acoustic features of speech (e.g., pronunciation, frequency, and rhythm) and often neglect the emotional expression conveyed through the voice and facial video; to address this limitation, this paper proposes an emotion evaluation method based on multimodal information. The method consists of three main parts: (1) generating virtual data with a Large Language Model (LLM) and audio-driven facial video synthesis, and combining the IEMOCAP dataset with self-recorded, teacher-rated student videos and audio to construct a multimodal emotion evaluation dataset; (2) a graph convolution-based emotion feature encoding network that extracts emotion features from the multimodal information; and (3) an emotion evaluation network based on Kolmogorov–Arnold Networks (KAN) that compares students' emotion features with standard synthetic data for precise evaluation. The emotion recognition method achieves an unweighted accuracy (UA) of 68.02% and an F1 score of 67.11% in experiments on the IEMOCAP dataset and TTS data. The KAN-based evaluation model outperforms an MLP baseline, with a mean squared error (MSE) of 0.811 versus 0.943, providing a reliable tool for evaluating language learners' emotional expressions.
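The record gives no implementation details for part (2); the following is a minimal PyTorch sketch of one way a graph convolution-based encoder over per-modality nodes (audio, video, text) could be structured. Class names, feature dimensions, and the fully connected adjacency are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class GraphEmotionEncoder(nn.Module):
    """Hypothetical sketch: treat each modality's feature vector as a node
    in a small fully connected graph and apply one graph-convolution step
    to mix information across modalities before pooling."""
    def __init__(self, in_dim=128, hid_dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.gcn = nn.Linear(hid_dim, hid_dim, bias=False)

    def forward(self, nodes):
        # nodes: (batch, num_modalities, in_dim), e.g. audio/video/text
        h = torch.relu(self.proj(nodes))
        n = h.size(1)
        adj = torch.ones(n, n, device=h.device) / n  # row-normalized dense graph
        h = torch.relu(self.gcn(adj @ h))            # one propagation step
        return h.mean(dim=1)                         # pooled emotion feature
```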
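Part (3) is likewise only named, not specified. As a rough illustration, here is a KAN-style evaluation head that scores a learner's emotion embedding against the embedding of the synthetic standard; the learnable per-edge functions use a radial-basis expansion as a stand-in for the B-spline bases of the original KAN formulation, and every name and hyperparameter is hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KANLayer(nn.Module):
    """Minimal Kolmogorov-Arnold layer: each input-output edge carries a
    learnable univariate function, parameterized here by radial basis
    functions plus a SiLU residual path (a simplification of B-splines)."""
    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*grid_range, num_basis))
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)
        self.base = nn.Linear(in_dim, out_dim)

    def forward(self, x):                       # x: (batch, in_dim)
        # RBF features of each scalar input: (batch, in_dim, num_basis)
        phi = torch.exp(-(x.unsqueeze(-1) - self.centers) ** 2)
        spline = torch.einsum("bik,oik->bo", phi, self.coef)
        return self.base(F.silu(x)) + spline

class KANEvaluator(nn.Module):
    """Scores how closely a student's emotion features match the reference."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.net = nn.Sequential(KANLayer(2 * feat_dim, hidden),
                                 KANLayer(hidden, 1))

    def forward(self, student_feat, reference_feat):
        return self.net(torch.cat([student_feat, reference_feat], dim=-1))
```

Trained against teacher ratings with a mean-squared-error loss, such a head would produce the kind of MSE comparison (KAN vs. MLP) reported in the abstract.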
- Published
2024