Cross-Modality Relevance for Reasoning on Language and Vision

Authors: Zheng, Chen; Guo, Quan; Kordjamshidi, Parisa
Publication Year: 2020

Abstract

This work deals with the challenge of learning and reasoning over language and vision data for related downstream tasks such as visual question answering (VQA) and natural language for visual reasoning (NLVR). We design a novel cross-modality relevance module that is used in an end-to-end framework to learn the relevance representation between components of various input modalities under the supervision of a target task, which is more generalizable to unobserved data than merely reshaping the original representation space. In addition to modeling the relevance between textual entities and visual entities, we model the higher-order relevance between entity relations in the text and object relations in the image. Our proposed approach shows competitive performance on two different language and vision tasks using public benchmarks and improves the state-of-the-art published results. The alignments of input spaces and their relevance representations learned on the NLVR task boost the training efficiency of the VQA task.

Comment: Accepted by ACL 2020
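As a rough illustration of the idea described in the abstract, the sketch below scores pairwise relevance between text entity embeddings and image object embeddings with a learned projection and dot-product interaction. This is a minimal, assumed sketch, not the paper's actual architecture: the class name `CrossModalityRelevance`, the projection-plus-dot-product form, and all dimensions are illustrative choices, and the paper's higher-order relevance over entity and object relations is not reproduced here.

```python
import torch
import torch.nn as nn


class CrossModalityRelevance(nn.Module):
    """Hypothetical sketch: score relevance between text entities and visual objects."""

    def __init__(self, text_dim: int, vis_dim: int, rel_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, rel_dim)  # project text entity features
        self.vis_proj = nn.Linear(vis_dim, rel_dim)    # project visual object features

    def forward(self, text_feats: torch.Tensor, vis_feats: torch.Tensor):
        # text_feats: (batch, n_tokens, text_dim); vis_feats: (batch, n_objects, vis_dim)
        t = self.text_proj(text_feats)                 # (batch, n_tokens, rel_dim)
        v = self.vis_proj(vis_feats)                   # (batch, n_objects, rel_dim)
        # pairwise relevance scores between every token and every object
        scores = torch.matmul(t, v.transpose(1, 2))    # (batch, n_tokens, n_objects)
        # relevance-weighted visual summary for each token
        attended = torch.softmax(scores, dim=-1) @ v   # (batch, n_tokens, rel_dim)
        return scores, attended


if __name__ == "__main__":
    # toy shapes only; e.g. BERT-style token features and region features
    model = CrossModalityRelevance(text_dim=768, vis_dim=2048, rel_dim=512)
    text = torch.randn(2, 12, 768)
    objs = torch.randn(2, 36, 2048)
    scores, attended = model(text, objs)
    print(scores.shape, attended.shape)  # (2, 12, 36) and (2, 12, 512)
```

The relevance scores could then be supervised indirectly by the target task (VQA or NLVR), which is the end-to-end setup the abstract describes.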

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2005.06035
Document Type: Working Paper