
Text-based person search via cross-modal alignment learning.

Authors :
Ke, Xiao
Liu, Hao
Xu, Peirong
Lin, Xinru
Guo, Wenzhong
Source :
Pattern Recognition. Aug 2024, Vol. 152.
Publication Year :
2024

Abstract

Text-based person search aims to retrieve the person images that correspond to a given text description. However, owing to the obvious pattern differences between the image and text modalities, aligning the two modalities remains a challenging problem. Most existing approaches consider semantic alignment only within a global context or over partial parts, and do not consider how to match image and text with respect to differences in modal information. Therefore, in this paper, we propose an efficient Modality-Aligned Person Search network (MAPS) to address this problem. First, we suppress image-specific information through image feature style normalization, achieving modality knowledge alignment and reducing the information gap between text and image. Second, we design a multi-granularity modal feature fusion and optimization method to enrich the modal features. To address the useless and redundant information in the multi-granularity fused features, we propose a Multi-granularity Feature Self-optimization Module (MFSM) that adaptively adjusts the contributions of different granularities in the fused features of the two modalities. Finally, to address the inconsistency of information between the training and inference stages, we propose a Cross-instance Feature Alignment (CFA) that helps the network strengthen its category-level generalization ability and improve retrieval performance. Extensive experiments demonstrate that our MAPS achieves state-of-the-art performance on all text-based person search datasets and significantly outperforms other existing methods.

• A novel text-based person search network is proposed that reduces modal differences while learning sufficient modal features.
• A multi-granularity feature self-optimization module is designed to optimize the multi-scale image modal features and the multi-level semantic text modal features, learning more discriminative features while suppressing useless and redundant information.
• A cross-instance feature alignment is proposed to construct image–text feature pairs so that category-level information participates in training.
• Extensive experiments on both the CUHK-PEDES and ICFG-PEDES datasets show that our MAPS obtains state-of-the-art performance, significantly outperforming other existing methods. [ABSTRACT FROM AUTHOR]
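This record only summarizes the method, so the following is a minimal sketch of two of the ideas described above: instance-style normalization of image features to suppress image-specific style statistics, and an MFSM-like gate that adaptively reweights multi-granularity features before fusion. It is not the authors' implementation; every name here (style_normalize, GranularityGate, dim) is hypothetical, and the softmax gate is one plausible reading of "adaptively adjust the corresponding contributions of different granularities".

# Hypothetical sketch of two ideas from the abstract; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

def style_normalize(x, eps=1e-5):
    # One plausible reading of "image feature style normalization":
    # remove per-instance mean/std over the spatial dims (instance-norm-style
    # whitening), suppressing image-specific style statistics.
    # x: (batch, channels, h, w)
    mu = x.mean(dim=(2, 3), keepdim=True)
    std = x.std(dim=(2, 3), keepdim=True)
    return (x - mu) / (std + eps)

class GranularityGate(nn.Module):
    # MFSM-like gate: learns a weight per granularity so that useless or
    # redundant granularities contribute less to the fused feature.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, feats):
        # feats: list of (batch, dim) features, one per granularity
        stacked = torch.stack(feats, dim=1)              # (batch, g, dim)
        weights = F.softmax(self.score(stacked), dim=1)  # (batch, g, 1)
        return (weights * stacked).sum(dim=1)            # (batch, dim)

As a usage sketch, gate = GranularityGate(dim=256) applied to three (batch, 256) granularity features returns a single fused (batch, 256) feature; the softmax weights play the role of the per-granularity "contributions" that the abstract says MFSM adjusts adaptively.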

Details

Language :
English
ISSN :
00313203
Volume :
152
Database :
Academic Search Index
Journal :
Pattern Recognition
Publication Type :
Academic Journal
Accession number :
176784493
Full Text :
https://doi.org/10.1016/j.patcog.2024.110481