Multimodal Sentiment Analysis Based on Composite Hierarchical Fusion.
- Source :
- Computer Journal. Jun 2024, Vol. 67 Issue 6, p2230-2245. 16p.
- Publication Year :
- 2024
Abstract
- Fully extracting modality features and fusing them efficiently are central tasks in multimodal sentiment analysis. To address the insufficient semantic information and poor cross-modal fusion of traditional sentiment classification models, this paper proposes a composite hierarchical feature fusion method combined with prior knowledge. First, an ALBERT (A Lite BERT) model and an improved ResNet model are constructed to extract text and image features, respectively, yielding high-dimensional feature vectors. Second, to address insufficient semantic expression in cross-scene settings, a prior knowledge enhancement model is proposed to enrich the data characteristics of each modality. Finally, to address poor cross-modal fusion, a composite hierarchical fusion model is proposed that combines a temporal convolutional network with an attention mechanism to fuse the sequence features of each modality and realize information interaction between the modalities. Experiments on the MVSA-Single and MVSA-Multi datasets show that the proposed model outperforms a series of comparison models and adapts well to new scenarios. [ABSTRACT FROM AUTHOR]
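
For orientation, the sketch below assembles the kind of pipeline the abstract outlines: pre-extracted text and image features (e.g. from ALBERT and ResNet) fused through a temporal convolution and a cross-modal attention step, then classified. It is a minimal illustration, not the authors' implementation; the module choices, dimensions, class count, and names (e.g. `CompositeHierarchicalFusion`) are assumptions made for the example.

```python
import torch
import torch.nn as nn


class CompositeHierarchicalFusion(nn.Module):
    """Illustrative fusion head: TCN-style convolution over text features,
    cross-modal attention onto image features, then sentiment classification."""

    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, num_classes=3):
        super().__init__()
        # Project both modalities into a shared hidden space (dims are assumptions).
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # Temporal convolution over the token sequence (stand-in for a TCN block).
        self.tcn = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=3, padding=1)
        # Cross-modal attention: text queries attend to image region features.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feats, image_feats):
        # text_feats: (batch, seq_len, text_dim), e.g. ALBERT token embeddings
        # image_feats: (batch, regions, image_dim), e.g. ResNet region features
        t = self.text_proj(text_feats)
        v = self.image_proj(image_feats)
        # Conv1d expects (batch, channels, length).
        t = self.tcn(t.transpose(1, 2)).transpose(1, 2)
        # Text attends to image regions; residual connection keeps the text context.
        fused, _ = self.cross_attn(query=t, key=v, value=v)
        fused = fused + t
        # Mean-pool over the sequence and classify.
        return self.classifier(fused.mean(dim=1))


if __name__ == "__main__":
    model = CompositeHierarchicalFusion()
    text = torch.randn(2, 32, 768)     # dummy ALBERT-style token features
    image = torch.randn(2, 49, 2048)   # dummy ResNet-style region features
    print(model(text, image).shape)    # torch.Size([2, 3])
```

The three-class output assumes the positive/neutral/negative labels used in MVSA-style sentiment tasks; the paper's actual hierarchical fusion and prior knowledge enhancement stages are not reproduced here.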
Details
- Language :
- English
- ISSN :
- 0010-4620
- Volume :
- 67
- Issue :
- 6
- Database :
- Academic Search Index
- Journal :
- Computer Journal
- Publication Type :
- Academic Journal
- Accession Number :
- 178338267
- Full Text :
- https://doi.org/10.1093/comjnl/bxae002