
Graph Sampling-Based Multi-Stream Enhancement Network for Visible-Infrared Person Re-Identification

Authors :
Jinhua Jiang
Junjie Xiao
Renlin Wang
Tiansong Li
Wenfeng Zhang
Ruisheng Ran
Sen Xiang
Source :
Sensors, Vol. 23, Iss. 18, p. 7948 (2023)
Publication Year :
2023
Publisher :
MDPI AG, 2023.

Abstract

With the increasing demand for person re-identification (Re-ID), all-day retrieval has become an inevitable trend. Single-modal Re-ID is no longer sufficient to meet this requirement, making multi-modal data crucial in Re-ID. Consequently, the Visible-Infrared Person Re-Identification (VI Re-ID) task has been proposed, which aims to match pairs of person images across the visible and infrared modalities. The significant discrepancy between the two modalities poses a major challenge. Existing VI Re-ID methods focus on cross-modal feature learning and modality transformation to alleviate this discrepancy but overlook the impact of person contour information. Contours are modality-invariant, which is vital for learning effective identity representations and for cross-modal matching. In addition, because of the low intra-modal diversity of the visible modality, it is difficult to distinguish the boundaries between some hard samples. To address these issues, we propose the Graph Sampling-based Multi-stream Enhancement Network (GSMEN). First, the Contour Expansion Module (CEM) incorporates a person's contour information into the original samples, further reducing the modality discrepancy and improving matching stability between image pairs from different modalities. Additionally, to better distinguish cross-modal hard sample pairs during training, an innovative Cross-modality Graph Sampler (CGS) is designed for sample selection before training. The CGS computes the feature distances between samples from different modalities and groups similar samples into the same batch during training, effectively exploring the boundary relationships between hard classes in the cross-modal setting. Experiments conducted on the SYSU-MM01 and RegDB datasets demonstrate the superiority of the proposed method. Specifically, on the VIS→IR task of the RegDB dataset, the method achieves 93.69% Rank-1 accuracy and 92.56% mAP.
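
This record contains no code, but the CEM description suggests a concrete preprocessing idea. Below is a minimal, hypothetical sketch of such a contour-expansion step, assuming contours are extracted with a Canny edge detector and blended back into the image as an explicit shape cue; the names extract_contour and expand_with_contour and the mixing weight alpha are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a contour-expansion step (not the paper's CEM code):
# extract a modality-invariant contour map via Canny edge detection and blend
# it into the original sample so visible and infrared images both carry an
# explicit person-shape cue.
import cv2
import numpy as np

def extract_contour(img_bgr: np.ndarray) -> np.ndarray:
    """Return a single-channel contour map with values in [0, 255]."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Canny(gray, 100, 200)

def expand_with_contour(img_bgr: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Blend the contour map into every channel of the original sample.

    alpha is a hypothetical mixing weight; the actual CEM may fuse contours
    differently (e.g., as a separate input stream of the network).
    """
    contour = extract_contour(img_bgr).astype(np.float32) / 255.0
    out = (img_bgr.astype(np.float32) * (1.0 - alpha)
           + 255.0 * alpha * contour[..., None])
    return np.clip(out, 0, 255).astype(np.uint8)
```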

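Similarly, the CGS description amounts to a nearest-neighbour batch-construction rule. The following is a minimal sketch under stated assumptions, not the paper's algorithm: per-modality class centroids are given as feature matrices, Euclidean distance serves as the feature distance, and each visible class is batched with its k nearest infrared classes; cross_modal_batches and k are hypothetical names and parameters.

```python
# Hypothetical sketch of cross-modality graph sampling (not the authors'
# code): rank infrared classes by feature distance to each visible class,
# then build batches pairing each anchor class with its nearest (hardest)
# cross-modal neighbours so training sees hard class boundaries.
import numpy as np

def cross_modal_batches(vis_feats: np.ndarray, ir_feats: np.ndarray,
                        k: int = 4) -> list:
    """vis_feats, ir_feats: (num_classes, dim) class centroids per modality.

    Returns one batch per visible class as (modality, class_id) pairs,
    where modality 0 = visible and 1 = infrared.
    """
    # Pairwise Euclidean distances between visible and infrared centroids.
    diff = vis_feats[:, None, :] - ir_feats[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)          # shape (C_vis, C_ir)
    batches = []
    for c in range(vis_feats.shape[0]):
        nearest_ir = np.argsort(dist[c])[:k]      # k hardest cross-modal classes
        batches.append([(0, c)] + [(1, int(j)) for j in nearest_ir])
    return batches
```

In a real training loop such a sampler would presumably be refreshed periodically as the feature extractor improves, since the nearest cross-modal neighbours change over the course of training.
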
Details

Language :
English
ISSN :
1424-8220
Volume :
23
Issue :
18
Database :
Directory of Open Access Journals
Journal :
Sensors
Publication Type :
Academic Journal
Accession number :
edsdoj.9796b13e467a429f8bb4c1faf781978c
Document Type :
Article
Full Text :
https://doi.org/10.3390/s23187948