
Visual Commonsense based Heterogeneous Graph Contrastive Learning

Authors :
Li, Zongzhao
Zhu, Xiangyu
Zhang, Xi
Zhang, Zhaoxiang
Lei, Zhen
Publication Year :
2023

Abstract

How to select relevant key objects and how to reason about the complex relationships across the vision and language domains are two key issues in many multi-modality applications such as visual question answering (VQA). In this work, we incorporate visual commonsense information and propose a heterogeneous graph contrastive learning method to better perform the visual reasoning task. Our method is designed in a plug-and-play manner, so that it can be quickly and easily combined with a wide range of representative methods. Specifically, our model contains two key components: the Commonsense-based Contrastive Learning and the Graph Relation Network. Using contrastive learning, we guide the model to concentrate more on discriminative objects and relevant visual commonsense attributes. Moreover, thanks to the introduction of the Graph Relation Network, the model reasons about the correlations between homogeneous edges and the similarities between heterogeneous edges, which makes information transmission more effective. Extensive experiments on four benchmarks show that our method greatly improves seven representative VQA models, demonstrating its effectiveness and generalizability.
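For readers unfamiliar with the contrastive objective the abstract refers to, a minimal sketch of a generic InfoNCE-style contrastive loss is shown below. This is a standard formulation for illustration only, not the paper's actual Commonsense-based Contrastive Learning loss; the function name and the choice of cosine similarity and temperature are assumptions.

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.1):
    """Generic InfoNCE contrastive loss for a single anchor embedding.

    anchor: (d,) vector; positives: (p, d); negatives: (n, d).
    Pulls the anchor toward positives and pushes it from negatives.
    Illustrative only -- not the loss used in the paper.
    """
    def cos_sim(a, b):
        # Cosine similarity between vector a and each row of b.
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return b @ a

    pos = np.exp(cos_sim(anchor, positives) / tau)
    neg = np.exp(cos_sim(anchor, negatives) / tau)
    # Loss is low when positives dominate the softmax denominator.
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))
```

In the paper's setting, the anchor would correspond to a question or object representation and the positives to discriminative objects and relevant visual commonsense attributes; here those roles are only sketched abstractly.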

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2311.06553
Document Type :
Working Paper