
Bi-directional Spatial-Semantic Attention Networks for Image-Text Matching

Authors:
Zhonghua Zhao
Xiaoming Zhang
Zhoujun Li
Feiran Huang
Source:
IEEE Transactions on Image Processing
Publication Year:
2018

Abstract

Image-text matching by deep models has recently made remarkable achievements in many tasks, such as image captioning and image search. A major challenge in matching images and text is that they usually have complicated underlying relations, and simply modeling these relations may lead to suboptimal performance. In this paper, we develop a novel approach, Bi-directional Spatial-Semantic Attention Networks (BSSAN), which leverages both the word-to-regions (W2R) relation and the visual-object-to-words (O2W) relation in a holistic deep framework for more effective matching. Specifically, to effectively encode the W2R relation, we adopt an LSTM with a bilinear attention function to infer the image regions most related to particular words, which is referred to as the W2R attention network. Conversely, the O2W attention network is proposed to discover the semantically close words for each visual object in the image. A deep model then unifies the two directional attention networks into a holistic learning framework to learn the matching scores of image-text pairs. Compared with existing image-text matching methods, our approach achieves state-of-the-art performance on the Flickr30K and MSCOCO datasets.
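The abstract does not specify the bilinear attention function beyond its role in the W2R network. Below is a minimal PyTorch sketch of what such word-to-region bilinear attention could look like: each word's LSTM hidden state scores every image region through a bilinear form, and the regions are pooled by the softmaxed scores. The class name, parameter names, and feature dimensions (1024-d word states, 2048-d region features) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearW2RAttention(nn.Module):
    """Hypothetical sketch of word-to-region (W2R) bilinear attention.
    Scores each region v against a word hidden state h via h^T W v,
    then pools regions by the resulting attention weights."""

    def __init__(self, word_dim=1024, region_dim=2048):
        super().__init__()
        # Bilinear weight: score(h, v) = h^T W v
        self.W = nn.Parameter(torch.empty(word_dim, region_dim))
        nn.init.xavier_uniform_(self.W)

    def forward(self, h, regions):
        # h:       (batch, word_dim)              LSTM hidden state of a word
        # regions: (batch, n_regions, region_dim) image region features
        scores = torch.einsum("bd,de,bne->bn", h, self.W, regions)
        alpha = F.softmax(scores, dim=1)               # weights over regions
        attended = torch.einsum("bn,bne->be", alpha, regions)
        return attended, alpha                         # pooled region feature

# Usage: attend 36 region features to one word state.
att = BilinearW2RAttention()
word_state = torch.randn(4, 1024)
region_feats = torch.randn(4, 36, 2048)
context, weights = att(word_state, region_feats)  # (4, 2048), (4, 36)
```

A symmetric O2W module would swap the roles, letting each visual-object feature attend over word states; the final matching score would then combine evidence from both directions, as the abstract describes.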

Details

ISSN:
1941-0042
Database:
OpenAIRE
Journal:
IEEE Transactions on Image Processing
Accession number:
edsair.doi.dedup.....3b9d7e77da797d3cc3be515ec9de0255