Remote Sensing Image-Text Retrieval With Implicit-Explicit Relation Reasoning
- Source :
- IEEE Transactions on Geoscience and Remote Sensing; 2024, Vol. 62 Issue: 1 p1-11, 11p
- Publication Year :
- 2024
Abstract
- Remote sensing image-text retrieval (RSITR) has become a research hotspot in recent years owing to its wide range of applications. Existing methods, based on either local or global feature matching, overlook two problems specific to remote sensing (RS) images: visual deviation caused by sensing variation, and mismatching between geographically adjacent image-text pairs. This work observes that these problems limit retrieval accuracy for RSITR. To address them, we present IERR, an implicit-explicit relation reasoning framework that learns relations between local visual and textual tokens and enhances global image-text matching without requiring additional prior supervision. Specifically, masked image modeling (MIM) and masked language modeling (MLM) are used for symmetric mask reasoning consistency alignment. Meanwhile, masked features (i.e., implicit relations) and unmasked features (i.e., explicit relations) are fed into a multimodal interaction encoder to enhance the representations of the visual and textual features. Extensive experimental results on the RSICD and RSITMD datasets demonstrate the superiority of IERR over 17 baselines.
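- The symmetric masking step described above can be illustrated with a minimal sketch: the same fraction of positions is masked in the visual token sequence and in the textual token sequence, yielding masked (implicit-relation) and unmasked (explicit-relation) views for the multimodal interaction encoder. This is an assumption-laden illustration, not the authors' implementation; the mask ratio, the `[MASK]` placeholder, and the token names are all hypothetical.

```python
import random

def symmetric_mask(visual_tokens, text_tokens, mask_ratio=0.3, seed=0):
    """Mask the same fraction of positions in both modalities.

    Returns (masked_visual, masked_text): copies of the inputs in which
    randomly chosen positions are replaced by the "[MASK]" placeholder.
    The unmasked originals serve as the explicit-relation view; the
    masked copies serve as the implicit-relation view.
    """
    rng = random.Random(seed)

    def mask_one(tokens):
        # Mask at least one position, otherwise round down by ratio.
        n_masked = max(1, int(len(tokens) * mask_ratio))
        masked_idx = set(rng.sample(range(len(tokens)), n_masked))
        return [t if i not in masked_idx else "[MASK]"
                for i, t in enumerate(tokens)]

    return mask_one(visual_tokens), mask_one(text_tokens)

# Hypothetical paired inputs: 8 image patch tokens and a caption.
visual = [f"patch_{i}" for i in range(8)]
text = "an airport with several parked airplanes".split()
masked_visual, masked_text = symmetric_mask(visual, text)
```

Both the masked views (`masked_visual`, `masked_text`) and the unmasked originals would then be fed to the multimodal interaction encoder, so the model can reason over what is hidden in each modality while grounding on the intact tokens of the other.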
Details
- Language :
- English
- ISSN :
- 0196-2892 and 1558-0644
- Volume :
- 62
- Issue :
- 1
- Database :
- Supplemental Index
- Journal :
- IEEE Transactions on Geoscience and Remote Sensing
- Publication Type :
- Periodical
- Accession number :
- ejs67615277
- Full Text :
- https://doi.org/10.1109/TGRS.2024.3466909