
Vision-Language Pre-Training with Triple Contrastive Learning

Authors:
Yang, Jinyu
Duan, Jiali
Tran, Son
Xu, Yi
Chanda, Sampath
Chen, Liqun
Zeng, Belinda
Chilimbi, Trishul
Huang, Junzhou
Publication Year:
2022

Abstract

Vision-language representation learning largely benefits from image-text alignment through contrastive losses (e.g., InfoNCE loss). The success of this alignment strategy is attributed to its capability in maximizing the mutual information (MI) between an image and its matched text. However, simply performing cross-modal alignment (CMA) ignores the data potential within each modality, which may result in degraded representations. For instance, although CMA-based models are able to map image-text pairs close together in the embedding space, they fail to ensure that similar inputs from the same modality stay close by. This problem can get even worse when the pre-training data is noisy. In this paper, we propose triple contrastive learning (TCL) for vision-language pre-training by leveraging both cross-modal and intra-modal self-supervision. Besides CMA, TCL introduces an intra-modal contrastive objective to provide complementary benefits in representation learning. To take advantage of localized and structural information from image and text input, TCL further maximizes the average MI between local regions of image/text and their global summary. To the best of our knowledge, ours is the first work that takes into account local structure information for multi-modality representation learning. Experimental evaluations show that our approach is competitive and achieves the new state of the art on various common downstream vision-language tasks such as image-text retrieval and visual question answering.

Comment: CVPR 2022; code: https://github.com/uta-smile/TCL
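The abstract names three InfoNCE-style objectives: cross-modal alignment (CMA), an intra-modal contrastive term, and a local-global MI term. The following is a minimal PyTorch sketch of how such a triple loss could be wired up. It is not the authors' implementation (see the linked repository for that); the tensor names, the augmented-view inputs, and the exact form of the local MI term are illustrative assumptions, and all embeddings are assumed to be L2-normalized.

import torch
import torch.nn.functional as F

def info_nce(queries, keys, t=0.07):
    # InfoNCE over a batch: the i-th query's positive key is keys[i];
    # all other keys in the batch act as negatives.
    logits = queries @ keys.t() / t  # (B, B) similarity matrix
    targets = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(logits, targets)

def local_global_info_nce(local_feats, global_feats, t=0.07):
    # Average InfoNCE between each local feature (image region / text
    # token) and the batch of global summaries; positives share an index.
    B, R, _ = local_feats.shape
    logits = torch.einsum('brd,kd->brk', local_feats, global_feats) / t
    targets = torch.arange(B, device=local_feats.device)
    targets = targets.unsqueeze(1).expand(B, R).reshape(-1)
    return F.cross_entropy(logits.reshape(B * R, B), targets)

def triple_contrastive_loss(img_g, txt_g, img_g_aug, txt_g_aug,
                            img_local, txt_local, t=0.07):
    # 1) Cross-modal alignment (CMA): image <-> paired caption.
    cma = info_nce(img_g, txt_g, t) + info_nce(txt_g, img_g, t)
    # 2) Intra-modal contrast: each global embedding vs. an augmented
    #    view of the same input within its own modality.
    imc = info_nce(img_g, img_g_aug, t) + info_nce(txt_g, txt_g_aug, t)
    # 3) Local MI: local regions/tokens vs. their own global summary.
    lmi = (local_global_info_nce(img_local, img_g, t)
           + local_global_info_nce(txt_local, txt_g, t))
    return cma + imc + lmi

A batch of paired image/text global embeddings plus per-region and per-token local features is enough to exercise this function. In a real pre-training pipeline the pool of negatives would typically be enlarged with a memory queue or momentum encoder, which this sketch omits.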

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession Number:
edsoai.on1333751952
Document Type:
Electronic Resource