
Cross‐modal retrieval with dual multi‐angle self‐attention.

Authors :
Li, Wenjie
Zheng, Yi
Zhang, Yuejie
Feng, Rui
Zhang, Tao
Fan, Weiguo
Source :
Journal of the Association for Information Science & Technology. Jan 2021, Vol. 72, Issue 1, p46-65. 20p. 5 Color Photographs, 5 Diagrams, 8 Charts, 1 Graph.
Publication Year :
2021

Abstract

In recent years, cross-modal retrieval has been a popular research topic in both computer vision and natural language processing. Because of their heterogeneous properties, a huge semantic gap exists between different modalities, and establishing correlations across modality data poses enormous challenges. In this work, we propose a novel end-to-end framework named Dual Multi-Angle Self-Attention (DMASA) for cross-modal retrieval. Multiple self-attention mechanisms are applied to extract fine-grained features for both images and texts from different angles. We then integrate the coarse-grained and fine-grained features into a multimodal embedding space, in which the similarity between images and texts can be compared directly. Moreover, we propose a special multistage training strategy, in which each preceding stage provides a good initialization for the succeeding stage and helps the framework perform better. Very promising experimental results over state-of-the-art methods are achieved on three benchmark datasets: Flickr8k, Flickr30k, and MSCOCO. [ABSTRACT FROM AUTHOR]
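The pipeline the abstract describes, local features attended over from several angles and pooled into a shared embedding space where image-text similarity reduces to a dot product, can be sketched compactly. The PyTorch sketch below is an illustration only: the module names, dimensions, and the choice to realize the "angles" as independent self-attention heads are assumptions for exposition, not the authors' actual DMASA implementation.

```python
# Minimal sketch of a dual self-attention cross-modal embedder.
# Assumes pre-extracted image region features and text token embeddings;
# all names and hyperparameters here are hypothetical, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionPool(nn.Module):
    """Scaled dot-product self-attention over a set of local features,
    mean-pooled into a single fine-grained vector."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, n_local, dim)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5
        attn = torch.softmax(scores, dim=-1)      # (batch, n_local, n_local)
        return (attn @ self.v(x)).mean(dim=1)     # (batch, dim)

class CrossModalEmbedder(nn.Module):
    """Maps image regions and text tokens into one embedding space.
    'Multi-angle' is approximated by several independent attention heads
    per modality whose pooled outputs are concatenated and projected."""
    def __init__(self, dim=512, n_angles=3):
        super().__init__()
        self.img_heads = nn.ModuleList(SelfAttentionPool(dim) for _ in range(n_angles))
        self.txt_heads = nn.ModuleList(SelfAttentionPool(dim) for _ in range(n_angles))
        self.img_proj = nn.Linear(dim * n_angles, dim)
        self.txt_proj = nn.Linear(dim * n_angles, dim)

    def _embed(self, feats, heads, proj):
        fine = torch.cat([h(feats) for h in heads], dim=-1)
        return F.normalize(proj(fine), dim=-1)    # unit-length embedding

    def forward(self, img_regions, txt_tokens):
        img = self._embed(img_regions, self.img_heads, self.img_proj)
        txt = self._embed(txt_tokens, self.txt_heads, self.txt_proj)
        return img @ txt.t()                      # cosine-similarity matrix

# Usage: retrieval ranks candidates by the rows/columns of the similarity
# matrix; a triplet ranking loss over it is a common training objective.
model = CrossModalEmbedder()
sims = model(torch.randn(8, 36, 512), torch.randn(8, 20, 512))  # (8, 8)
```

The multistage training strategy in the abstract would, under the same reading, amount to optimizing coarser components first and using the resulting weights to initialize the stage that adds the fine-grained attention branches; the paper itself should be consulted for the exact staging.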

Details

Language :
English
ISSN :
2330-1635
Volume :
72
Issue :
1
Database :
Academic Search Index
Journal :
Journal of the Association for Information Science & Technology
Publication Type :
Academic Journal
Accession number :
147599134
Full Text :
https://doi.org/10.1002/asi.24373