
View-Invariant and Similarity Learning for Robust Person Re-Identification

Authors :
Jean-Paul Ainam
Ke Qin
Guisong Liu
Guangchun Luo
Source :
IEEE Access, Vol 7, Pp 185486-185495 (2019)
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Person re-identification aims to identify pedestrians across non-overlapping camera views. Deep learning methods have been successfully applied to the problem and have achieved impressive results. However, these methods rely on either feature extraction or metric learning alone, ignoring the joint benefit and mutually complementary effects of view-specific person representations. In this paper, we propose a multi-view deep network architecture coupled with an n-pair loss (JNPL) to eliminate complex view discrepancies and learn nonlinear mapping functions that are view-invariant. We show that the large variation in pedestrian viewpoints can be handled well by a multi-view network. We simultaneously exploit the complementary representations shared between views and propose an adaptive similarity loss function to better learn a similarity metric. In detail, we first extract view-invariant feature representations from n pairs of images using a multi-stream CNN and then aggregate these features for prediction. Given n positive pairs and a negative example, the network aggregates the feature maps of the n positive pairs, predicts the identity of the person, and at the same time learns features that discriminate the positive pairs against the negative sample. Extensive evaluations on three large-scale datasets demonstrate the substantial advantages of our method over existing state-of-the-art methods.
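To make the n-pair objective mentioned in the abstract concrete, the following is a minimal NumPy sketch of the standard N-pair loss (Sohn, 2016), which the paper's JNPL builds on. This is an illustration only: the function name and interface are assumptions, and it does not reproduce the paper's specific aggregation of n positive pairs or its adaptive similarity term.

```python
import numpy as np

def n_pair_loss(anchor, positive, negatives):
    """Standard N-pair loss (Sohn, 2016) — illustrative sketch, not the
    paper's exact JNPL formulation.

    anchor, positive : (d,) embedding vectors of the same identity.
    negatives        : (m, d) embeddings of other identities.

    Returns log(1 + sum_i exp(a·n_i − a·p)), which is small when the
    anchor is more similar to the positive than to every negative.
    """
    pos_sim = anchor @ positive        # similarity to the positive pair
    neg_sims = negatives @ anchor      # similarity to each negative
    # log1p(sum(exp(...))) pushes the positive similarity above all negatives
    return float(np.log1p(np.sum(np.exp(neg_sims - pos_sim))))
```

For example, with an anchor identical to its positive and an orthogonal negative, the loss is log(1 + e^(−1)) ≈ 0.31; as the negative becomes as similar to the anchor as the positive is, the loss rises toward log 2 and beyond, which is the gradient signal used to separate identities.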

Details

Language :
English
ISSN :
21693536
Volume :
7
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.9f1f2507f8d4459eadb911895bba51e5
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2019.2960030