Relatively-Paired Space Analysis: Learning a Latent Common Space From Relatively-Paired Observations

Authors:
Zhanghui Kuang
Kwan-Yee K. Wong
Source:
International Journal of Computer Vision. 113:176-192
Publication Year:
2014
Publisher:
Springer Science and Business Media LLC, 2014.

Abstract

Discovering a latent common space between different modalities plays an important role in cross-modality pattern recognition. Existing techniques often require absolutely-paired observations as training data and cannot capture more general semantic relationships between cross-modality observations, which greatly limits their applicability. In this paper, we propose a general framework for learning a latent common space from relatively-paired observations (i.e., two observations from different modalities are more likely to be paired than another two). Relative-pairing information is encoded using relative proximities of observations in the latent common space. By building a discriminative model and maximizing a distance margin, a projection function that maps observations into the latent common space is learned for each modality. Cross-modality pattern recognition can then be carried out in the latent common space. To speed up learning on large-scale training data, the problem is reformulated as learning a structural model, which is solved efficiently by the cutting-plane algorithm. The proposed framework has been evaluated on feature fusion, cross-pose face recognition, text-image retrieval, and attribute-image retrieval; experimental results demonstrate that it outperforms other state-of-the-art approaches.
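The margin-based idea in the abstract lends itself to a simple illustration. The following is a minimal sketch, not the authors' implementation: it assumes linear projections and minimizes a plain triplet hinge loss on relative proximities by batch gradient descent, rather than the structural-model/cutting-plane solver the abstract describes. The function name learn_rpsa and all parameters are hypothetical.

```python
import numpy as np

def learn_rpsa(X, Y, triplets, dim=10, margin=1.0, lr=1e-3, epochs=200, seed=0):
    """Sketch: learn linear projections P (modality X) and Q (modality Y)
    into a shared latent space from relative-pairing triplets.

    X: (n_x, d_x) observations of the first modality
    Y: (n_y, d_y) observations of the second modality
    triplets: list of (i, j, k) meaning X[i] is more likely paired
              with Y[j] than with Y[k]
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((dim, X.shape[1])) * 0.01
    Q = rng.standard_normal((dim, Y.shape[1])) * 0.01

    for _ in range(epochs):
        gP = np.zeros_like(P)
        gQ = np.zeros_like(Q)
        for i, j, k in triplets:
            px, qj, qk = P @ X[i], Q @ Y[j], Q @ Y[k]
            d_pos = np.sum((px - qj) ** 2)  # distance to the more-likely pair
            d_neg = np.sum((px - qk) ** 2)  # distance to the less-likely pair
            if margin + d_pos - d_neg > 0:  # hinge active: margin is violated
                # Gradients of the hinge term w.r.t. P and Q
                gP += 2.0 * np.outer(qk - qj, X[i])
                gQ += -2.0 * np.outer(px - qj, Y[j]) + 2.0 * np.outer(px - qk, Y[k])
        P -= lr * gP / max(len(triplets), 1)
        Q -= lr * gQ / max(len(triplets), 1)
    return P, Q

# Toy usage: 2-D and 3-D modalities with one relative-pairing constraint.
X = np.array([[1.0, 0.0]])
Y = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
P, Q = learn_rpsa(X, Y, triplets=[(0, 0, 1)])
```

After training, cross-modality matching would be carried out by projecting new observations with P and Q and comparing Euclidean distances in the latent space, mirroring the retrieval setting described above.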

Details

ISSN:
0920-5691 (print) and 1573-1405 (electronic)
Volume:
113
Database:
OpenAIRE
Journal:
International Journal of Computer Vision
Accession number:
edsair.doi...........0e59800b7ee35a70aee8add780584573