Text-Image Cross-modal Retrieval Based on Transformer
- Source :
- Jisuanji kexue, Vol 50, Iss 4, Pp 141-148 (2023)
- Publication Year :
- 2023
- Publisher :
- Editorial office of Computer Science, 2023.
Abstract
- With the growth of Internet multimedia data, text-image retrieval has become a research hotspot. In image-text retrieval, mutual attention mechanisms achieve better matching results by letting image and text features interact. However, such methods cannot extract image features and text features independently: large-scale retrieval then requires computing cross-modal interactions at a late stage, which consumes a lot of time and prevents fast retrieval and matching. Meanwhile, Transformer-based cross-modal image-text feature learning has achieved good results and is attracting growing attention from researchers. This paper designs a novel Transformer-based text-image retrieval network (HAS-Net) with the following main improvements: a hierarchical Transformer encoding structure is designed to better exploit low-level grammatical information and high-level semantic information; the traditional global feature aggregation method is improved, using the self-attention mechanism to design a new aggregation method; and by sharing the Transformer encoding layer, image features and text features are mapped into a common feature encoding space. Experiments on the MS-COCO and Flickr30k datasets show improved cross-modal retrieval performance that leads comparable algorithms, demonstrating that the designed network structure is effective.
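- The paper's implementation is not part of this record; the sketch below is only an illustrative reading of the three ideas the abstract names (hierarchical encoding, self-attention feature aggregation, and a shared encoding layer). All class names, dimensions, and the low/high-level fusion scheme are assumptions for illustration, not the authors' HAS-Net code.

```python
# A minimal PyTorch sketch of the three ideas in the abstract, under assumed
# names and dimensions; not the authors' released implementation.
import torch
import torch.nn as nn


class AttentionPool(nn.Module):
    """Aggregate a token sequence into one vector with learned attention
    weights (a self-attention-style aggregation replacing mean pooling)."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                        # x: (batch, seq, dim)
        w = torch.softmax(self.score(x), dim=1)  # per-token weights, sum to 1
        return (w * x).sum(dim=1)                # (batch, dim)


class HASNetSketch(nn.Module):
    def __init__(self, dim=512, heads=8, low_layers=2, high_layers=2):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        # Separate low-level encoders per modality (low-level grammatical /
        # visual features) ...
        self.img_low = nn.TransformerEncoder(make(), low_layers)
        self.txt_low = nn.TransformerEncoder(make(), low_layers)
        # ... and a shared high-level encoder mapping both modalities into a
        # common feature space, as the abstract describes.
        self.shared_high = nn.TransformerEncoder(make(), high_layers)
        self.img_pool = AttentionPool(dim)
        self.txt_pool = AttentionPool(dim)

    def encode(self, tokens, low_encoder, pool):
        low_out = low_encoder(tokens)            # low-level features
        high_out = self.shared_high(low_out)     # high-level semantic features
        fused = low_out + high_out               # assumed hierarchical fusion
        return nn.functional.normalize(pool(fused), dim=-1)

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: e.g. region/patch features; txt_tokens: word embeddings;
        # both assumed already projected to `dim` upstream.
        v = self.encode(img_tokens, self.img_low, self.img_pool)
        t = self.encode(txt_tokens, self.txt_low, self.txt_pool)
        return v, t


if __name__ == "__main__":
    model = HASNetSketch()
    v, t = model(torch.randn(4, 36, 512), torch.randn(4, 20, 512))
    sim = v @ t.T                                # (4, 4) image-text similarity
    print(sim.shape)
```

- Note that the two embeddings are produced independently, so image and text features can be precomputed offline and compared by cosine similarity at query time; this is what addresses the slow late-stage interaction problem the abstract raises about mutual-attention methods.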
Details
- Language :
- Chinese
- ISSN :
- 1002-137X
- Volume :
- 50
- Issue :
- 4
- Database :
- Directory of Open Access Journals
- Journal :
- Jisuanji kexue
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.6766dc335c624b1a8b3ab09bed0713b5
- Document Type :
- article
- Full Text :
- https://doi.org/10.11896/jsjkx.220100083