
Exploiting Pseudo Image Captions for Multimodal Summarization

Authors:
Jiang, Chaoya
Xie, Rui
Ye, Wei
Sun, Jinan
Zhang, Shikun
Publication Year:
2023

Abstract

Cross-modal contrastive learning in vision-language pretraining (VLP) faces the challenge of (partial) false negatives. In this paper, we study this problem from the perspective of Mutual Information (MI) optimization. It is well known that the InfoNCE loss used in contrastive learning maximizes a lower bound on the MI between anchors and their positives, while we theoretically prove that the MI involving negatives also matters when noise commonly exists. Guided by a more general lower-bound form for optimization, we propose a contrastive learning strategy regulated by progressively refined cross-modal similarity, to more accurately optimize the MI between an image/text anchor and its negative texts/images rather than improperly minimizing it. Our method performs competitively on four downstream cross-modal tasks and systematically balances the beneficial and harmful effects of (partial) false negative samples under theoretical guidance.

Comment: Accepted at ACL 2023 Findings
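For context, the InfoNCE bound referenced in the abstract is the standard result from Oord et al. (2018): with N candidates per anchor, I(x; y) >= log N - L_InfoNCE, so minimizing the loss tightens a lower bound on the anchor-positive MI. As a rough illustration only (the paper's exact formulation is not reproduced in this record), the sketch below shows a standard InfoNCE loss alongside a hypothetical similarity-regulated variant in PyTorch; the pseudo-similarity matrix `pseudo_sim`, the 0.5 mixing weight, and the soft-target construction are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.07):
    # Standard InfoNCE: row i of `positives` is the positive for anchor i;
    # every other row in the batch serves as a negative.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature          # (B, B) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def regulated_info_nce(anchors, positives, pseudo_sim, temperature=0.07):
    # Hypothetical variant: soften the one-hot targets with a refined
    # cross-modal similarity estimate `pseudo_sim` (B, B), so that likely
    # false negatives are pushed away less aggressively. The 0.5 mixing
    # weight is an assumption for illustration.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature
    soft = F.softmax(pseudo_sim / temperature, dim=-1)
    eye = torch.eye(logits.size(0), device=logits.device)
    targets = 0.5 * eye + 0.5 * soft
    return (-targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

Under this reading, a high pseudo-similarity entry converts part of a negative's repulsion into attraction, which is one way to avoid improperly minimizing the MI with (partial) false negatives.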

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.05496
Document Type:
Working Paper