
Reasoning with Multi-Structure Commonsense Knowledge in Visual Dialog

Authors :
Zhang, Shunyu
Jiang, Xiaoze
Yang, Zequn
Wan, Tao
Qin, Zengchang
Publication Year :
2022

Abstract

Visual Dialog requires an agent to engage in a conversation with humans grounded in an image. Many studies on Visual Dialog focus on understanding the dialog history or the content of an image, while a considerable number of commonsense-required questions are ignored. Handling these scenarios depends on logical reasoning that requires commonsense priors. How to capture relevant commonsense knowledge complementary to the history and the image remains a key challenge. In this paper, we propose a novel model, Reasoning with Multi-structure Commonsense Knowledge (RMK). In our model, external knowledge is represented with sentence-level facts and graph-level facts, to properly suit the composite scenario of dialog history and image. On top of these multi-structure representations, our model can capture relevant knowledge and incorporate it into the vision and semantic features, via graph-based interaction and transformer-based fusion. Experimental results and analysis on the VisDial v1.0 and VisDialCK datasets show that our proposed model outperforms comparable methods.

Comment: MULA Workshop, CVPR 2022
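The abstract describes fusing two knowledge structures (sentence-level and graph-level facts) into visual features via attention-based interaction. The following is a minimal illustrative sketch of that general idea using single-head cross-attention; it is not the authors' implementation, and all names, shapes, and dimensions are assumptions made for the example.

```python
import numpy as np

def cross_attention(q, k, v):
    """Single-head scaled dot-product attention: queries attend over key/value pairs."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
d = 64
vision = rng.standard_normal((36, d))          # image region features (assumed 36 regions)
sentence_facts = rng.standard_normal((5, d))   # sentence-level knowledge embeddings
graph_facts = rng.standard_normal((8, d))      # node embeddings from a fact graph

# Concatenate both knowledge structures, then let vision features attend
# over them and add the result back (a residual-style fusion).
knowledge = np.concatenate([sentence_facts, graph_facts], axis=0)
fused = vision + cross_attention(vision, knowledge, knowledge)
print(fused.shape)
```

In a full model this fusion would typically be wrapped in transformer layers with learned projections and multiple heads; the sketch only shows how queries from one modality can aggregate information from multi-structure knowledge sources.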

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2204.04680
Document Type :
Working Paper