
Knowledge-Guided Dynamic Modality Attention Fusion Framework for Multimodal Sentiment Analysis

Authors :
Feng, Xinyu
Lin, Yuming
He, Lihua
Li, You
Chang, Liang
Zhou, Ya
Publication Year :
2024

Abstract

Multimodal Sentiment Analysis (MSA) infers users' sentiment from multimodal data. Previous methods either treat the contribution of each modality equally or statically use text as the dominant modality for interaction, neglecting that any modality may become dominant. In this paper, we propose a Knowledge-Guided Dynamic Modality Attention Fusion Framework (KuDA) for multimodal sentiment analysis. KuDA uses sentiment knowledge to guide the model in dynamically selecting the dominant modality and adjusting the contribution of each modality. In addition, with the obtained multimodal representation, the model can further highlight the contribution of the dominant modality through a correlation evaluation loss. Extensive experiments on four MSA benchmark datasets show that KuDA achieves state-of-the-art performance and adapts to different dominant-modality scenarios.

Comment: Accepted to EMNLP Findings 2024
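The core idea the abstract describes — letting a knowledge signal dynamically weight each modality's contribution rather than fixing text as dominant — can be illustrated with a minimal sketch. This is not the paper's actual architecture; the `knowledge_query` vector, the dot-product scoring, and the weighted-sum fusion are all simplifying assumptions standing in for KuDA's knowledge-guided attention.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dynamic_modality_fusion(modal_feats, knowledge_query):
    """Hypothetical sketch of knowledge-guided modality weighting.

    modal_feats: dict mapping modality name -> feature vector (same dim d).
    knowledge_query: a vector standing in for sentiment-knowledge guidance.
    Each modality is scored by its affinity to the query, so whichever
    modality best matches the knowledge signal dominates the fusion.
    """
    names = list(modal_feats)
    feats = np.stack([modal_feats[n] for n in names])            # (M, d)
    scores = feats @ knowledge_query / np.sqrt(feats.shape[1])   # (M,)
    weights = softmax(scores)                                    # dynamic contributions
    fused = weights @ feats                                      # (d,) fused representation
    return fused, dict(zip(names, weights))

# Toy usage: three modalities with random features.
rng = np.random.default_rng(0)
d = 8
feats = {m: rng.normal(size=d) for m in ("text", "audio", "visual")}
knowledge_query = rng.normal(size=d)
fused, weights = dynamic_modality_fusion(feats, knowledge_query)
```

Because the weights come from a softmax over query affinities, they always sum to one, and no single modality is hard-coded as dominant: changing the knowledge query shifts which modality drives the fused representation.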

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.04491
Document Type :
Working Paper