Learning dual attention enhancement feature for visible–infrared person re-identification.
- Author
- Zhang, Guoqing; Zhang, Yinyin; Zhang, Hongwei; Chen, Yuhao; and Zheng, Yuhui
- Subjects
- *INFRARED imaging; *MODAL logic; *EMBEDDINGS (Mathematics); *IMAGING systems; *INFRARED technology
- Abstract
Most previous visible–infrared person re-identification methods emphasized learning modality-shared features to narrow modality differences, while neglecting the benefits of modality-specific features for feature embedding and for narrowing the modality gap. To tackle this issue, our paper designs a method based on dual attention enhancement features that uses shallow and deep features simultaneously. We first convert visible images into grayscale images to alleviate the visual difference. Then, to reduce the difference between modalities by learning modality-specific features, we design a shallow feature measurement module, in which a class-specific maximum mean discrepancy loss measures the distribution difference of specific features between the two modalities. Finally, we design a dual attention feature enhancement module, which aims to mine more useful context information from modality-shared features to shorten the distance between classes within modalities. Our model exceeds the current SOTAs on SYSU-MM01, with 66.61% Rank-1 accuracy and 62.86% mAP. [ABSTRACT FROM AUTHOR]
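The abstract names a class-specific maximum mean discrepancy (MMD) loss for measuring the distribution gap between visible and infrared features, but does not give its exact formulation. The following is a minimal NumPy sketch of one plausible reading: a standard biased MMD² estimate with a Gaussian kernel, averaged over identities shared by both modalities. The function names (`mmd2`, `class_mmd`), the kernel choice, and the bandwidth `sigma` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian kernel between rows of x (n, d) and y (m, d).
    d2 = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared MMD between two feature sets;
    # zero when the two sets are identical, larger for shifted distributions.
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

def class_mmd(feats_vis, feats_ir, labels_vis, labels_ir, sigma=1.0):
    # "Class-specific" variant (assumption): compute MMD² per shared identity,
    # then average, so the loss aligns the modalities class by class.
    classes = np.intersect1d(labels_vis, labels_ir)
    vals = [mmd2(feats_vis[labels_vis == c], feats_ir[labels_ir == c], sigma)
            for c in classes]
    return float(np.mean(vals))
```

In a training loop, `class_mmd` would be applied to the shallow modality-specific feature maps (flattened per sample) and minimized alongside the identification loss, pulling the per-identity feature distributions of the two modalities together.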
- Published
- 2024