
Augmenting Multimodal Content Representation with Transformers for Misinformation Detection

Authors :
Jenq-Haur Wang
Mehdi Norouzi
Shu Ming Tsai
Source :
Big Data and Cognitive Computing, Vol 8, Iss 10, p 134 (2024)
Publication Year :
2024
Publisher :
MDPI AG, 2024.

Abstract

Information sharing on social media has become a common practice for people around the world. Since it is difficult to check user-generated content on social media, huge amounts of rumors and misinformation are spread alongside authentic information. On the one hand, most social platforms identify rumors through manual fact-checking, which is inefficient. On the other hand, an emerging form of misinformation contains inconsistent image–text pairs, so it would be beneficial to compare the meaning of the multimodal content within a post to detect such image–text inconsistency. In this paper, we propose a novel approach to misinformation detection based on multimodal feature fusion with transformers and credibility assessment with self-attention-based Bi-RNN networks. Firstly, captions are derived from images using an image captioning module to obtain their semantic descriptions. These are compared with the surrounding text by fine-tuning transformers to check semantic consistency. Then, to further aggregate sentiment features into the text representation, we fine-tune a separate transformer for text sentiment classification, whose output is concatenated to augment the text embeddings. Finally, Multi-Cell Bi-GRUs with self-attention are used to train the credibility assessment model for misinformation detection. In experiments on tweets, the best performance, with an accuracy of 0.904 and an F1-score of 0.921, is obtained when fusing the augmented embeddings with the sentiment classification results. This shows the potential of the proposed way of applying transformers to misinformation detection. Further investigation is needed to validate the performance on various types of multimodal discrepancies.
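
The pipeline described above can be illustrated with a minimal sketch of the feature-construction stage, using off-the-shelf Hugging Face pipelines. The specific checkpoints (BLIP for captioning, the default sentiment model, BERT for embeddings), the helper names `embed` and `build_features`, and the exact concatenation layout are illustrative assumptions; the paper fine-tunes its own transformers rather than using pre-trained pipelines as-is.

```python
import torch
from transformers import pipeline, AutoTokenizer, AutoModel

# Illustrative checkpoints (assumptions, not the paper's fine-tuned models).
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
sentiment = pipeline("sentiment-analysis")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Return the [CLS] embedding of a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0]            # shape: (1, hidden_size)

def build_features(image_path: str, post_text: str) -> torch.Tensor:
    # 1. Derive a caption from the image to obtain its semantic description.
    caption = captioner(image_path)[0]["generated_text"]
    # 2. Embed the post text and the caption so their semantics can be compared.
    text_emb = embed(post_text)
    caption_emb = embed(caption)
    # 3. Sentiment classification result, signed by predicted label (assumption).
    sent = sentiment(post_text)[0]
    sent_feat = torch.tensor([[sent["score"] if sent["label"] == "POSITIVE"
                               else -sent["score"]]])
    # 4. Concatenate everything into one augmented representation.
    return torch.cat([text_emb, caption_emb, sent_feat], dim=-1)
```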
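For the credibility assessment stage, a simplified stand-in is a bidirectional GRU with additive self-attention over its hidden states, followed by a binary classifier. The class name `AttentiveBiGRU`, the layer sizes, and the assumption that the input is a sequence of augmented feature vectors are all illustrative; the paper's Multi-Cell Bi-GRU variant is more elaborate.

```python
import torch
import torch.nn as nn

class AttentiveBiGRU(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)       # scores each time step
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) sequence of augmented feature vectors
        states, _ = self.gru(x)                        # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)   # attention over time
        context = (weights * states).sum(dim=1)        # weighted sum of states
        return self.classifier(context)                # credibility logits

# Usage: classify a batch of 4 posts, each a sequence of 32 feature vectors
# (768-dimensional here; the true feature layout depends on the fusion step).
model = AttentiveBiGRU(input_dim=768)
logits = model(torch.randn(4, 32, 768))
```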

Details

Language :
English
ISSN :
25042289
Volume :
8
Issue :
10
Database :
Directory of Open Access Journals
Journal :
Big Data and Cognitive Computing
Publication Type :
Academic Journal
Accession number :
edsdoj.2980bd89015411e9b39a05b5a23bcb5
Document Type :
article
Full Text :
https://doi.org/10.3390/bdcc8100134