
Deep transfer metric learning

Authors :
Junlin Hu
Yap-Peng Tan
Jiwen Lu
School of Electrical and Electronic Engineering
Source :
CVPR
Publication Year :
2015
Publisher :
IEEE, 2015.

Abstract

Conventional metric learning methods usually assume that the training and test samples are captured in similar scenarios, so that their distributions can be assumed to be the same. This assumption does not hold in many real visual recognition applications, especially when samples are captured across different datasets. In this paper, we propose a new deep transfer metric learning (DTML) method to learn a set of hierarchical nonlinear transformations for cross-domain visual recognition by transferring discriminative knowledge from the labeled source domain to the unlabeled target domain. Specifically, our DTML learns a deep metric network by maximizing the inter-class variations, minimizing the intra-class variations, and minimizing the distribution divergence between the source domain and the target domain at the top layer of the network. To better exploit the discriminative information from the source domain, we further develop a deeply supervised transfer metric learning (DSTML) method, which augments DTML with additional supervision so that the outputs of both the hidden layers and the top layer are optimized jointly. Experimental results on cross-dataset face verification and person re-identification validate the effectiveness of the proposed methods.
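To make the top-layer objective described in the abstract concrete, here is a minimal numpy sketch. It combines intra-class compactness, inter-class separability, and a squared Maximum Mean Discrepancy (MMD) term between source and target features. This is a simplification, not the paper's exact formulation: the function names (`dtml_objective`, `mmd_sq`) and the use of all same-/different-class pairs (rather than the paper's k-nearest-neighbor pairs) and a linear-kernel MMD are assumptions for illustration.

```python
import numpy as np

def mmd_sq(Fs, Ft):
    # Squared MMD between the mean embeddings of source features Fs
    # and target features Ft (linear kernel, i.e. distance of means).
    return float(np.sum((Fs.mean(axis=0) - Ft.mean(axis=0)) ** 2))

def dtml_objective(Fs, ys, Ft, alpha=1.0, beta=1.0):
    # Simplified top-layer DTML objective:
    #   mean intra-class distance  -  alpha * mean inter-class distance
    #   + beta * squared MMD(source, target)
    # Fs: (n, d) labeled source features; ys: (n,) labels;
    # Ft: (m, d) unlabeled target features.
    d = np.sum(Fs ** 2, axis=1)
    D = d[:, None] + d[None, :] - 2.0 * Fs @ Fs.T  # pairwise squared distances
    same = (ys[:, None] == ys[None, :]) & ~np.eye(len(ys), dtype=bool)
    diff = ys[:, None] != ys[None, :]
    intra = D[same].mean() if same.any() else 0.0
    inter = D[diff].mean() if diff.any() else 0.0
    return intra - alpha * inter + beta * mmd_sq(Fs, Ft)
```

In the paper this quantity is minimized with respect to the weights of the deep network that produces the features; here the features are taken as given, which is enough to show how the three terms interact.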

Details

Database :
OpenAIRE
Journal :
2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Accession number :
edsair.doi.dedup.....9bce8f75b78472a6f7b4490e9a3a71a6