
X-GGM: Graph Generative Modeling for Out-of-Distribution Generalization in Visual Question Answering

Authors :
Jiang, Jingjing
Liu, Ziyi
Liu, Yifan
Nan, Zhixiong
Zheng, Nanning
Publication Year :
2021

Abstract

Encouraging progress has been made on Visual Question Answering (VQA) in recent years, but it remains challenging to enable VQA models to adaptively generalize to out-of-distribution (OOD) samples. Intuitively, recompositions of existing visual concepts (i.e., attributes and objects) can generate compositions unseen in the training set, which can help VQA models generalize to OOD samples. In this paper, we formulate OOD generalization in VQA as a compositional generalization problem and propose a graph generative modeling-based training scheme (X-GGM) to model the problem implicitly. X-GGM leverages graph generative modeling to iteratively generate a relation matrix and node representations for a predefined graph whose nodes are attribute-object pairs. Furthermore, to alleviate unstable training in graph generative modeling, we propose a gradient distribution consistency loss to constrain the data distribution with adversarial perturbations and the generated distribution. The baseline VQA model (LXMERT) trained with the X-GGM scheme achieves state-of-the-art OOD performance on two standard VQA OOD benchmarks, i.e., VQA-CP v2 and GQA-OOD. Extensive ablation studies demonstrate the effectiveness of the X-GGM components. Code is available at https://github.com/jingjing12110/x-ggm.

Comment: Accepted by ACM MM2021
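
To make the abstract's description more concrete, the following is a minimal, illustrative Python (PyTorch) sketch of the two ideas it mentions: a graph generative module that produces a relation matrix and node representations from attribute-object pair nodes, and a consistency loss between two gradient distributions. All names here (GraphGenerativeModule, edge_scorer, node_update, gradient_distribution_consistency) and the symmetric-KL form of the loss are assumptions for illustration only; the authors' actual implementation is in the linked repository.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class GraphGenerativeModule(nn.Module):
        """Hypothetical sketch: given features of attribute-object pair nodes,
        generate a relation (adjacency) matrix and refined node representations."""

        def __init__(self, dim: int = 768, hidden: int = 256):
            super().__init__()
            # scores a relation weight for every ordered node pair
            self.edge_scorer = nn.Sequential(
                nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )
            # simple message-passing-style node update
            self.node_update = nn.Linear(dim, dim)

        def forward(self, nodes: torch.Tensor):
            # nodes: [B, N, D] features of the predefined graph's nodes
            B, N, D = nodes.shape
            src = nodes.unsqueeze(2).expand(B, N, N, D)
            dst = nodes.unsqueeze(1).expand(B, N, N, D)
            relation = torch.sigmoid(
                self.edge_scorer(torch.cat([src, dst], dim=-1)).squeeze(-1)
            )  # [B, N, N] generated relation matrix
            updated = F.relu(self.node_update(relation @ nodes))  # aggregated node reps
            return relation, updated


    def gradient_distribution_consistency(grad_adv: torch.Tensor,
                                          grad_gen: torch.Tensor) -> torch.Tensor:
        """Hypothetical consistency loss: encourage the gradient statistics of the
        adversarially perturbed branch and the generated branch to match, here via
        a symmetric KL divergence between softmax-normalized gradients (an assumed
        stand-in, since the abstract does not specify the exact form)."""
        p = F.log_softmax(grad_adv.flatten(1), dim=-1)
        q = F.log_softmax(grad_gen.flatten(1), dim=-1)
        return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                      + F.kl_div(q, p, log_target=True, reduction="batchmean"))


    if __name__ == "__main__":
        ggm = GraphGenerativeModule(dim=32)
        feats = torch.randn(2, 6, 32)      # 6 attribute-object nodes per sample
        rel, reps = ggm(feats)
        print(rel.shape, reps.shape)       # [2, 6, 6] and [2, 6, 32]

In the paper's setting such a module would be trained jointly with the VQA backbone (LXMERT), with the consistency term added to the task loss; the sketch above only shows the shapes and flow of data, not the authors' training procedure.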

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1269566094
Document Type :
Electronic Resource