A Multimodal Fusion Network For Student Emotion Recognition Based on Transformer and Tensor Product

Authors:
Xiang, Ao
Qi, Zongqing
Wang, Han
Yang, Qin
Ma, Danqing
Publication Year:
2024

Abstract

This paper introduces a new multimodal model based on the Transformer architecture and a tensor product fusion strategy, combining BERT text vectors and ViT image vectors to classify students' psychological conditions, achieving an accuracy of 93.65%. The aim of the study is to accurately analyze students' mental health status from multiple data sources. The paper discusses modality fusion methods, including early, late, and intermediate fusion, to address the challenges of integrating multimodal information. Ablation studies compare the performance of different models and fusion techniques, showing that the proposed model outperforms existing methods such as CLIP and ViLBERT in both accuracy and inference speed. The conclusions indicate that, while the model has clear advantages in emotion recognition, its potential to incorporate further data modalities offers directions for future research.
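The record describes only the high-level design (BERT text features, ViT image features, tensor product fusion, and a classifier over emotion categories), so the following PyTorch sketch is an illustrative reconstruction rather than the authors' implementation; the checkpoint names, projection size, number of classes, and the linear classifier head are all assumptions.

# Minimal sketch (not the authors' code): tensor-product fusion of BERT text
# features and ViT image features for emotion classification.
import torch
import torch.nn as nn
from transformers import BertModel, ViTModel

class TensorProductFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 4, proj_dim: int = 64):
        super().__init__()
        # Assumed backbone checkpoints; the paper does not specify them here.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.vit = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
        # Project each 768-d pooled vector to a smaller size so the outer
        # product (proj_dim x proj_dim) stays tractable.
        self.text_proj = nn.Linear(768, proj_dim)
        self.image_proj = nn.Linear(768, proj_dim)
        self.classifier = nn.Linear(proj_dim * proj_dim, num_classes)

    def forward(self, input_ids, attention_mask, pixel_values):
        t = self.bert(input_ids=input_ids, attention_mask=attention_mask).pooler_output
        v = self.vit(pixel_values=pixel_values).pooler_output
        t = self.text_proj(t)   # (batch, proj_dim)
        v = self.image_proj(v)  # (batch, proj_dim)
        # Tensor (outer) product fusion: every text feature interacts with
        # every image feature; the resulting matrix is flattened and classified.
        fused = torch.einsum("bi,bj->bij", t, v).flatten(start_dim=1)
        return self.classifier(fused)

Projecting both 768-dimensional features down before taking the outer product keeps the fused representation small enough for a single linear classification layer, which is one common way to make tensor-product fusion practical.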

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2403.08511
Document Type:
Working Paper