Emoji multimodal microblog sentiment analysis based on mutual attention mechanism.
- Source :
- Scientific reports [Sci Rep] 2024 Nov 26; Vol. 14 (1), pp. 29314. Date of Electronic Publication: 2024 Nov 26.
- Publication Year :
- 2024
Abstract
- Emojis, utilizing visual means, mimic human facial expressions and postures to convey emotions and opinions. They are widely used on social media platforms such as Sina Weibo and have become a crucial feature for sentiment analysis. However, existing approaches often treat emojis as special symbols or convert them into text labels, thereby neglecting the rich visual information of emojis. We propose a novel multimodal information integration model for emoji microblog sentiment analysis. To effectively leverage the emoji visual information, the model employs a text-emoji visual mutual attention mechanism. Experiments on a manually annotated microblog dataset show that, compared to baseline models that do not incorporate emoji visual information, the proposed model achieves improvements of 1.37% in macro F1 score and 2.30% in accuracy, respectively. To facilitate related research, our corpus will be publicly available at https://github.com/yx100/Emojis/blob/main/weibo-emojis-annotation
- Competing Interests: Declarations. Competing interests: The authors declare no competing interests.
- (© 2024. The Author(s).)
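The abstract describes a text-emoji visual mutual attention mechanism for fusing the two modalities. The paper itself does not spell out the computation here, so the following is a minimal sketch of one plausible reading: scaled dot-product cross-attention applied in both directions (text tokens attending to emoji image features, and vice versa), with the enriched streams pooled into a single fused vector. All function names and dimensions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mutual_attention(text_feats, emoji_feats):
    """Hypothetical bidirectional (mutual) cross-attention fusion.

    text_feats:  (n_tokens, d)  encoded microblog text tokens
    emoji_feats: (n_emojis, d)  visual features of the emojis in the post
    Returns a single fused vector of size 2*d.
    """
    d = text_feats.shape[-1]
    # Direction 1: each text token attends over the emoji visual features.
    attn_t2e = softmax(text_feats @ emoji_feats.T / np.sqrt(d), axis=-1)
    text_enriched = attn_t2e @ emoji_feats
    # Direction 2: each emoji visual feature attends over the text tokens.
    attn_e2t = softmax(emoji_feats @ text_feats.T / np.sqrt(d), axis=-1)
    emoji_enriched = attn_e2t @ text_feats
    # Mean-pool each enriched stream and concatenate for a classifier head.
    return np.concatenate([text_enriched.mean(axis=0),
                           emoji_enriched.mean(axis=0)])

rng = np.random.default_rng(0)
text = rng.standard_normal((12, 64))   # 12 text tokens, 64-dim embeddings
emoji = rng.standard_normal((3, 64))   # 3 emoji image features, 64-dim
fused = mutual_attention(text, emoji)
print(fused.shape)  # (128,)
```

In practice the fused vector would feed a sentiment classification layer; the paper's reported gains (1.37% macro F1, 2.30% accuracy) come from adding the visual branch rather than treating emojis as plain text symbols.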
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 14
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Scientific reports
- Publication Type :
- Academic Journal
- Accession number :
- 39592651
- Full Text :
- https://doi.org/10.1038/s41598-024-80167-x