
Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding

Authors :
Wang, Jiazhen
Liu, Bin
Miao, Changtao
Zhao, Zhiwei
Zhuang, Wanyi
Chu, Qi
Yu, Nenghai
Publication Year :
2023

Abstract

AI-synthesized text and images have gained significant attention, particularly due to the widespread dissemination of multi-modal manipulations on the internet, which has resulted in numerous negative impacts on society. Existing methods for multi-modal manipulation detection and grounding primarily focus on fusing vision-language features to make predictions, while overlooking the importance of modality-specific features, leading to sub-optimal results. In this paper, we construct a simple and novel transformer-based framework for multi-modal manipulation detection and grounding tasks. Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment. To achieve this, we introduce visual/language pre-trained encoders and dual-branch cross-attention (DCA) to extract and fuse modality-unique features. Furthermore, we design decoupled fine-grained classifiers (DFC) to enhance modality-specific feature mining and mitigate modality competition. Moreover, we propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality using learnable queries, thereby improving the discovery of forged details. Extensive experiments on the $\rm DGM^4$ dataset demonstrate the superior performance of our proposed model compared to state-of-the-art approaches.

Comment: This work has been submitted to the IEEE for possible publication. Camera-ready version and supplementary material
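To make the two mechanisms named in the abstract concrete, below is a minimal sketch (not the authors' released code) of a dual-branch cross-attention module that fuses the two modalities while keeping each modality-specific stream, and an implicit-manipulation-query module that pools global context with learnable queries. All module names, feature dimensions, and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class DualBranchCrossAttention(nn.Module):
    """Each branch attends to the other modality but keeps its own stream."""

    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.img_from_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_from_img = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_feats, txt_feats):
        # Image tokens query text tokens and vice versa; residual connections
        # preserve the modality-specific information in each branch.
        img_fused, _ = self.img_from_txt(img_feats, txt_feats, txt_feats)
        txt_fused, _ = self.txt_from_img(txt_feats, img_feats, img_feats)
        return img_feats + img_fused, txt_feats + txt_fused


class ImplicitManipulationQuery(nn.Module):
    """Learnable queries aggregate global contextual cues within a modality."""

    def __init__(self, dim: int = 256, num_queries: int = 8, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # Broadcast the shared queries over the batch and attend to all tokens.
        q = self.queries.unsqueeze(0).expand(feats.size(0), -1, -1)
        pooled, _ = self.attn(q, feats, feats)
        return pooled  # (batch, num_queries, dim)


if __name__ == "__main__":
    img = torch.randn(2, 49, 256)   # e.g. 7x7 visual patch tokens
    txt = torch.randn(2, 32, 256)   # e.g. 32 word tokens
    img_out, txt_out = DualBranchCrossAttention()(img, txt)
    cues = ImplicitManipulationQuery()(img_out)
    print(img_out.shape, txt_out.shape, cues.shape)
```

The pooled query outputs would feed downstream detection and grounding heads; how the paper's decoupled fine-grained classifiers consume them is not specified here and is left out of the sketch.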

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.12657
Document Type :
Working Paper