Enhancing two-view correspondence learning by local-global self-attention.
- Source :
- Neurocomputing, Oct 2021, Vol. 459, p. 176-187. 12 p.
- Publication Year :
- 2021
Abstract
- Seeking reliable correspondences is a fundamental and significant task in computer vision. Recent work has demonstrated that the task can be effectively accomplished with a deep network based on multi-layer perceptrons that uses context normalization to process the input. However, context normalization treats every correspondence equally, which reduces the representational capability of potential inliers. To address this problem, we propose a novel and effective Local-Global Self-Attention (LAGA) layer, based on the self-attention mechanism, that captures the contextual information of potential inliers from coarse to fine while suppressing outliers during input processing. The global self-attention module captures abundant global contextual information across the whole image, and the local self-attention module obtains rich local contextual information within local regions. The global and local contextual information is then combined to produce feature maps with stronger representational capacity. Extensive experiments show that networks equipped with our proposed LAGA layer outperform the original and other comparative networks on outlier removal and camera pose estimation in both outdoor and indoor scenes.
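The coarse-to-fine combination described in the abstract can be sketched with plain scaled dot-product self-attention. Everything below is an illustrative assumption, not the paper's architecture: the function names, the fixed-size window used as the "local region", and the additive fusion of the two branches are all placeholders for the authors' actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_self_attention(feats):
    # global branch: every correspondence attends to all others
    # feats: (N, d) array of per-correspondence features
    d = feats.shape[-1]
    scores = feats @ feats.T / np.sqrt(d)       # (N, N) affinities
    return softmax(scores, axis=-1) @ feats     # context-weighted features

def local_self_attention(feats, window=8):
    # local branch: attention restricted to fixed-size chunks
    # (a real implementation would use spatial neighborhoods; the
    # index-based windowing here is a simplifying assumption)
    n = feats.shape[0]
    out = np.empty_like(feats)
    for i in range(0, n, window):
        out[i:i + window] = global_self_attention(feats[i:i + window])
    return out

def laga_layer(feats, window=8):
    # combine global and local context; additive fusion is an assumption
    return global_self_attention(feats) + local_self_attention(feats, window)
```

For example, `laga_layer(np.random.randn(128, 32))` returns a `(128, 32)` array in which each row mixes whole-set and within-window context for one putative correspondence.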
- Subjects :
- Deep learning
- Multilayer perceptrons
- Computer vision
- Problem solving
Details
- Language :
- English
- ISSN :
- 0925-2312
- Volume :
- 459
- Database :
- Academic Search Index
- Journal :
- Neurocomputing
- Publication Type :
- Academic Journal
- Accession number :
- 152347560
- Full Text :
- https://doi.org/10.1016/j.neucom.2021.06.084