Dual‐granularity feature fusion in visible‐infrared person re‐identification.

Authors :
Cai, Shuang
Yang, Shanmin
Hu, Jing
Wu, Xi
Source :
IET Image Processing (Wiley-Blackwell). 3/27/2024, Vol. 18 Issue 4, p972-980. 9p.
Publication Year :
2024

Abstract

Visible-infrared person re-identification (VI-ReID) aims to recognize images of the same person captured in different modalities. Existing methods mainly learn single-granularity representations, which offer limited discriminability and weak robustness. This paper proposes a novel dual-granularity feature fusion network for VI-ReID. Specifically, it adopts a dual-branch module that extracts global and local features and then fuses them to enhance representational ability. Furthermore, it proposes an identity-aware modal discrepancy loss that promotes modality alignment by reducing the gap between visible and infrared features. Finally, because non-discriminative information in the modality-shared features of RGB-IR images is harmful, a greyscale conversion is introduced to better extract modality-irrelevant discriminative features. Extensive experiments on the SYSU-MM01 and RegDB datasets demonstrate the effectiveness of the framework and its superiority over state-of-the-art methods. [ABSTRACT FROM AUTHOR]
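The dual-granularity idea described above, a global descriptor fused with part-level (local) descriptors, can be illustrated with a minimal pooling sketch. This is not the authors' implementation; the function names, the use of horizontal stripes for the local branch, and the fusion by concatenation are illustrative assumptions based on common stripe-based ReID designs.

```python
# Hypothetical sketch (not the paper's code): dual-granularity feature pooling.
# A backbone feature map (C x H x W, here as nested lists) is pooled two ways:
# globally (one vector per image) and locally (one vector per horizontal
# stripe), then the two granularities are fused by concatenation.

def global_pool(fmap):
    """Average a C x H x W feature map into a C-dim global vector."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
            for ch in fmap]

def local_pool(fmap, num_stripes=2):
    """Split the height axis into horizontal stripes and average-pool each,
    giving part-level (local) descriptors, one C-dim vector per stripe."""
    h = len(fmap[0])
    stripe_h = h // num_stripes
    stripes = []
    for s in range(num_stripes):
        rows = range(s * stripe_h, (s + 1) * stripe_h)
        stripes.append([sum(sum(ch[r]) for r in rows) /
                        (stripe_h * len(ch[0]))
                        for ch in fmap])
    return stripes

def fuse(fmap, num_stripes=2):
    """Concatenate the global vector with all local stripe vectors."""
    feat = global_pool(fmap)
    for stripe in local_pool(fmap, num_stripes):
        feat.extend(stripe)
    return feat

# Toy 2-channel, 4x2 feature map: fused vector has 2 (global) + 2*2 (local) dims.
fmap = [[[1, 1], [1, 1], [3, 3], [3, 3]],
        [[0, 2], [0, 2], [0, 2], [0, 2]]]
print(fuse(fmap))  # -> [2.0, 1.0, 1.0, 1.0, 3.0, 1.0]
```

In a full model, each branch would carry its own classifier during training; concatenation at inference is one common way to combine the granularities.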

Subjects

Subjects :
*COMPUTER vision
*IMAGE retrieval

Details

Language :
English
ISSN :
1751-9659
Volume :
18
Issue :
4
Database :
Academic Search Index
Journal :
IET Image Processing (Wiley-Blackwell)
Publication Type :
Academic Journal
Accession number :
175870018
Full Text :
https://doi.org/10.1049/ipr2.12999