
Cross-Modal Image Retrieval Considering Semantic Relationships With Many-to-Many Correspondence Loss

Authors :
Zhang, Huaying
Yanagi, Rintaro
Togo, Ren
Ogawa, Takahiro
Haseyama, Miki
Publication Year :
2023

Abstract

A cross-modal image retrieval method that explicitly considers semantic relationships between images and texts is proposed. Most conventional cross-modal image retrieval methods retrieve target images by directly measuring the similarities between candidate images and query texts in a common semantic embedding space. However, such methods tend to focus on the one-to-one correspondence of a predefined image-text pair during the training phase, ignoring other semantically similar images and texts. By considering the many-to-many correspondences between semantically similar images and texts, a common embedding space can be constructed that preserves semantic relationships, allowing users to accurately find more images related to the input query texts. Thus, in this paper, we propose a cross-modal image retrieval method that considers the semantic relationships between images and texts. The proposed method calculates the similarities between texts as semantic similarities to acquire these relationships. We then introduce a loss function that explicitly constructs the many-to-many correspondences between semantically similar images and texts from their semantic relationships. We also propose an evaluation metric to assess whether each method can construct an embedding space that considers these semantic relationships. Experimental results demonstrate that the proposed method outperforms conventional methods in terms of this newly proposed metric.
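The abstract does not include implementation details of the proposed loss. As an illustration only, the following is a minimal sketch of one way a many-to-many correspondence loss of this kind could look: text-text cosine similarities define soft target distributions, and the loss is the cross-entropy between each image's predicted distribution over texts and that soft target. All function names and the `temperature` parameter are assumptions, not taken from the paper.

```python
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of `a` and rows of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T


def many_to_many_loss(img_emb: np.ndarray, txt_emb: np.ndarray,
                      temperature: float = 0.1) -> float:
    """Illustrative many-to-many correspondence loss (not the paper's exact form).

    Unlike a one-to-one contrastive loss with one-hot targets, semantically
    similar texts receive nonzero target probability mass, so images paired
    with similar texts are also pulled together in the embedding space.
    """
    # Soft targets: row-wise softmax over text-text cosine similarities.
    txt_sim = cosine_sim(txt_emb, txt_emb) / temperature
    targets = np.exp(txt_sim - txt_sim.max(axis=1, keepdims=True))
    targets /= targets.sum(axis=1, keepdims=True)

    # Predicted log-distribution over texts for each image (log-softmax).
    logits = cosine_sim(img_emb, txt_emb) / temperature
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

    # Average cross-entropy over the batch.
    return float(-(targets * log_probs).sum(axis=1).mean())
```

When image and text embeddings are perfectly aligned, the predicted distribution matches the soft targets and the loss reaches its minimum (the entropy of the targets); misaligned pairings yield a strictly larger cross-entropy.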

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1375175883
Document Type :
Electronic Resource