
Holistic Multi-Modal Memory Network for Movie Question Answering.

Authors :
Wang, Anran
Luu, Anh Tuan
Foo, Chuan-Sheng
Zhu, Hongyuan
Tay, Yi
Chandrasekhar, Vijay
Source :
IEEE Transactions on Image Processing; 2020, Vol. 29, p489-499, 11p
Publication Year :
2020

Abstract

Answering questions using multi-modal context is a challenging problem, as it requires a deep integration of diverse data sources. Existing approaches consider only a subset of all possible interactions among data sources during one attention hop. In this paper, we present a holistic multi-modal memory network (HMMN) framework that fully considers interactions between different input sources (multi-modal context and question) at each hop. In addition, to home in on relevant information, our framework takes answer choices into consideration during the context retrieval stage. Our HMMN framework effectively integrates information from the multi-modal context, question, and answer choices, enabling more informative context to be retrieved for question answering. Experimental results on the MovieQA and TVQA datasets validate the effectiveness of our HMMN framework. Extensive ablation studies show the importance of holistic reasoning and reveal the contributions of different attention strategies to model performance. [ABSTRACT FROM AUTHOR]
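To make the idea of an answer-aware attention hop concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes pre-extracted video-frame and subtitle embeddings of a common dimension, fuses the question and a candidate answer into a single query (the paper's actual fusion and memory-update mechanisms are more elaborate), and attends over both context modalities in one hop. The function name `holistic_hop` and the additive fusion are illustrative choices only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def holistic_hop(video, subs, question, answer):
    """One hypothetical attention hop: score each context item
    against the question and a candidate answer jointly, then
    return an attention-weighted multi-modal context summary."""
    query = question + answer        # fuse question and answer cues (assumed fusion)
    v_att = softmax(video @ query)   # attention weights over video frames
    s_att = softmax(subs @ query)    # attention weights over subtitle lines
    v_ctx = v_att @ video            # weighted video summary
    s_ctx = s_att @ subs             # weighted subtitle summary
    return v_ctx + s_ctx             # fused context for answer scoring

# Toy usage with random features.
rng = np.random.default_rng(0)
d = 8
video = rng.normal(size=(5, d))      # 5 video-frame embeddings
subs = rng.normal(size=(7, d))       # 7 subtitle-line embeddings
question = rng.normal(size=d)
answer = rng.normal(size=d)
ctx = holistic_hop(video, subs, question, answer)
```

Because every candidate answer yields its own query, the retrieved context differs per answer choice, which is the intuition behind answer-aware retrieval; scoring each candidate against its own context vector would then select the final answer.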

Details

Language :
English
ISSN :
1057-7149
Volume :
29
Database :
Complementary Index
Journal :
IEEE Transactions on Image Processing
Publication Type :
Academic Journal
Accession number :
170077987
Full Text :
https://doi.org/10.1109/TIP.2019.2931534