
Multi-View Incongruity Learning for Multimodal Sarcasm Detection

Authors:
Guo, Diandian
Cao, Cong
Yuan, Fangfang
Liu, Yanbing
Zeng, Guangjie
Yu, Xiaoyan
Peng, Hao
Yu, Philip S.
Publication Year:
2024

Abstract

Multimodal sarcasm detection (MSD) is essential for various downstream tasks. Existing MSD methods tend to rely on spurious correlations: they often mistakenly prioritize non-essential features yet still make correct predictions, demonstrating poor generalizability beyond their training environments. In light of this phenomenon, this paper undertakes several initiatives. Firstly, we identify two primary causes of the reliance on spurious correlations. Secondly, we address these challenges by proposing a novel method that integrates Multimodal Incongruities via Contrastive Learning (MICL) for multimodal sarcasm detection. Specifically, we first leverage incongruity to drive multi-view learning from three views: token-patch, entity-object, and sentiment. We then introduce extensive data augmentation to mitigate biased learning of the textual modality. Additionally, we construct a test set, SPMSD, which contains potential spurious correlations, to evaluate the model's generalizability. Experimental results demonstrate the superiority of MICL on benchmark datasets, and further analyses showcase MICL's effectiveness in mitigating the effect of spurious correlations.

Comment: Accepted to COLING 2025
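The abstract does not specify the exact contrastive objective used by MICL. As a rough illustration only, a standard symmetric InfoNCE-style contrastive loss over paired text/image embeddings (all names and shapes here are hypothetical, not taken from the paper) might be sketched as:

```python
import numpy as np

def log_softmax(x, axis):
    # Numerically stable log-softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def info_nce_loss(text_emb, image_emb, temperature=0.1):
    """Symmetric InfoNCE loss: row i of each (N, d) matrix is a positive pair.

    This is a generic contrastive-learning sketch, not the loss from MICL.
    """
    # L2-normalize so dot products are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = (t @ v.T) / temperature  # (N, N) similarity matrix
    n = logits.shape[0]
    idx = np.arange(n)
    # Cross-entropy in both directions, with positives on the diagonal.
    loss_t2v = -log_softmax(logits, axis=1)[idx, idx]
    loss_v2t = -log_softmax(logits, axis=0)[idx, idx]
    return float((loss_t2v + loss_v2t).mean() / 2)
```

Under such an objective, well-aligned text/image pairs yield a low loss, while mismatched pairs are pushed apart; a multi-view variant would apply a loss of this form per view (e.g. token-patch, entity-object, sentiment) and combine the terms.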

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.00756
Document Type:
Working Paper