
Joint Representation Learning and Keypoint Detection for Cross-View Geo-Localization.

Authors:
Lin, Jinliang
Zheng, Zhedong
Zhong, Zhun
Luo, Zhiming
Li, Shaozi
Yang, Yi
Sebe, Nicu
Source:
IEEE Transactions on Image Processing; 2022, Vol. 31, pp. 3780-3792, 13 p.
Publication Year:
2022

Abstract

In this paper, we study the cross-view geo-localization problem of matching images taken from different viewpoints. The key to this task is learning a discriminative, viewpoint-invariant visual representation. Inspired by the human visual system's ability to mine local patterns, we propose a new framework, RK-Net, that jointly learns the discriminative Representation and detects salient Keypoints with a single Network. Specifically, we introduce a Unit Subtraction Attention Module (USAM) that automatically discovers representative keypoints from feature maps and draws attention to salient regions. USAM contains very few learnable parameters yet yields a significant performance improvement, and it can be easily plugged into different networks. We demonstrate through extensive experiments that (1) by incorporating USAM, RK-Net enables end-to-end joint learning without requiring extra annotations. Representation learning and keypoint detection are two highly related tasks: representation learning aids keypoint detection, while keypoint detection, in turn, strengthens the model against large appearance changes caused by viewpoint variation. (2) USAM is easy to implement and can be integrated with existing methods, further improving state-of-the-art performance. We achieve competitive geo-localization accuracy on three challenging datasets, i.e., University-1652, CVUSA and CVACT. Our code is available at https://github.com/AggMan96/RK-Net.
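The abstract only sketches USAM's behavior: a plug-in attention block with very few parameters that derives saliency from feature subtraction. The exact design is in the paper and the linked repository; as a rough illustration only, below is a minimal PyTorch sketch of a subtraction-based attention block under those assumptions. The class name UnitSubtractionAttention, the average-pooling neighborhood, and the 1x1 saliency convolution are hypothetical stand-ins, not the authors' implementation.

```python
import torch
import torch.nn as nn

class UnitSubtractionAttention(nn.Module):
    """Hypothetical plug-in block: subtract a local 'unit' average from the
    feature map to highlight locally distinctive responses, squash the
    result into a saliency map, and reweight the features. Kept deliberately
    tiny (one 1x1 conv) to mirror the 'very few parameters' claim."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        # Fixed average pooling provides the neighborhood reference that
        # each unit is subtracted against (no learnable weights here).
        self.pool = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)
        # Single 1x1 convolution (C -> 1): the block's only parameters.
        self.to_saliency = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        diff = x - self.pool(x)                       # unit subtraction
        attn = torch.sigmoid(self.to_saliency(diff))  # saliency map in [0, 1]
        return x * (1.0 + attn)                       # residual reweighting

# Usage: drop the block between backbone stages of any CNN.
feats = torch.randn(2, 256, 32, 32)   # (batch, channels, height, width)
out = UnitSubtractionAttention(channels=256)(feats)
assert out.shape == feats.shape       # shape-preserving, so it plugs in anywhere
```

The residual form x * (1 + attn) keeps the identity path intact, which is one common way to make an attention block safe to insert into a pretrained backbone.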

Details

Language:
English
ISSN:
1057-7149
Volume:
31
Database:
Complementary Index
Journal:
IEEE Transactions on Image Processing
Publication Type:
Academic Journal
Accession Number:
170077224
Full Text:
https://doi.org/10.1109/TIP.2022.3175601