
Multimodal grid features and cell pointers for scene text visual question answering

Authors :
Dimosthenis Karatzas
Ernest Valveny
Marçal Rusiñol
Ali Furkan Biten
Lluis Gomez
Andres Mafla
Rubèn Tito
Source :
Pattern Recognition Letters. 150:242-249
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

This paper presents a new model for the task of scene text visual question answering, in which questions about a given image can only be answered by reading and understanding scene text that is present in it. The proposed model is based on an attention mechanism that attends to multi-modal features conditioned on the question, allowing it to reason jointly about the textual and visual modalities in the scene. The output weights of this attention module over the grid of multi-modal spatial features are interpreted as the probability that a certain spatial location of the image contains the answer text to the given question. Our experiments demonstrate competitive performance on two standard datasets. Furthermore, this paper provides a novel analysis of the ST-VQA dataset based on a human performance study.
Comment: This paper is under consideration at Pattern Recognition Letters
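To illustrate the mechanism the abstract describes, the following is a minimal sketch (not the authors' code) of a question-conditioned attention module over a grid of multi-modal features, where the softmax-normalized attention weights act as a probability distribution over grid cells for locating the answer text. All class names, layer choices, and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GridCellPointer(nn.Module):
    """Hypothetical module: scores each grid cell conditioned on the question."""

    def __init__(self, visual_dim=512, text_dim=300, question_dim=768, hidden=256):
        super().__init__()
        # Project the concatenated visual + scene-text features of each grid cell.
        self.cell_proj = nn.Linear(visual_dim + text_dim, hidden)
        # Project the question embedding into the same space.
        self.q_proj = nn.Linear(question_dim, hidden)
        # Scalar attention score per cell.
        self.score = nn.Linear(hidden, 1)

    def forward(self, visual_grid, text_grid, question):
        # visual_grid: (B, H, W, visual_dim); text_grid: (B, H, W, text_dim)
        # question:    (B, question_dim)
        B, H, W, _ = visual_grid.shape
        cells = torch.cat([visual_grid, text_grid], dim=-1)       # (B, H, W, Dv+Dt)
        cells = self.cell_proj(cells)                             # (B, H, W, hidden)
        q = self.q_proj(question).view(B, 1, 1, -1)               # broadcast over grid
        scores = self.score(torch.tanh(cells + q)).squeeze(-1)    # (B, H, W)
        # Softmax over all cells: probability that each location holds the answer.
        probs = F.softmax(scores.view(B, H * W), dim=-1).view(B, H, W)
        return probs

# Usage: pick the most likely grid cell for one image with an 8x8 feature grid.
model = GridCellPointer()
probs = model(torch.randn(1, 8, 8, 512), torch.randn(1, 8, 8, 300), torch.randn(1, 768))
best_cell = probs.view(1, -1).argmax(dim=-1)  # flat index of the predicted cell
```

In this sketch the pointer is a simple additive attention; the paper's actual feature extractors, fusion strategy, and answer decoding are not reproduced here.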

Details

ISSN :
0167-8655
Volume :
150
Database :
OpenAIRE
Journal :
Pattern Recognition Letters
Accession number :
edsair.doi.dedup.....d8ed10e7bce429acee75bf6dabb64e94
Full Text :
https://doi.org/10.1016/j.patrec.2021.06.026