View graph construction for scenes with duplicate structures via graph convolutional network
- Author
- Yang Peng, Shen Yan, Yuxiang Liu, Yu Liu, and Maojun Zhang
- Subjects
graph convolutional network, image embedding, metric learning, view graph construction, Computer applications to medicine. Medical informatics, R858-859.7, Computer software, QA76.75-76.765
- Abstract
View graph construction aims to effectively organise a disordered image dataset through image retrieval techniques before structure from motion (SfM). Existing view graph construction methods usually fail to handle scenes with duplicate structures, because they treat view graph construction solely as pairwise image matching and do not exploit the topological details of the images in the dataset. In this paper, we address this problem from a novel perspective, constructing the view graph in a global paradigm with an end‐to‐end graph convolutional network (GCN). First, a location‐aware embedding module based on the Vision Transformer architecture encodes images into a feature space that accounts for feature location, improving the distinction between features of duplicate structures. Second, a graph convolutional network consisting of a topological relationship preserving module and a feature metric learning module is proposed. The topological relationship preserving module helps nodes retain the features of their connected neighbourhoods; by merging this topological connectivity information into the image embeddings, our method performs image matching in a global manner and thus improves the ability to disambiguate images of duplicate scenes. The feature metric learning module is then embedded into the GCN to dynamically predict linkage among nodes based on their features. Finally, our method combines these three parts to jointly optimise node features and linkage prediction in an end‐to‐end paradigm. Qualitative and quantitative comparisons on three public benchmark datasets demonstrate that the proposed method performs favourably against other state‐of‐the‐art methods.
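The abstract describes a pipeline of image embedding, graph convolution over the view graph, and metric-based link prediction. The sketch below is a minimal, hypothetical illustration of that general idea in plain PyTorch; it is not the authors' implementation, and all module names (`ViewGraphGCN`, `GCNLayer`, `LinkPredictor`), dimensions, and the toy adjacency matrix are assumptions chosen only to make the example self-contained.

```python
# Minimal sketch (assumed structure, not the paper's code): propagate node (image)
# features over the current view graph with GCN-style layers, then score candidate
# image pairs with a learned feature metric to predict links.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: aggregate neighbour features via a degree-normalised adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # adj: (N, N) adjacency with self-loops; normalise the aggregation by node degree.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.linear(adj @ x / deg))


class LinkPredictor(nn.Module):
    """Metric-learning head: score a node pair from the difference between their embeddings."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h, pairs):
        diff = torch.abs(h[pairs[:, 0]] - h[pairs[:, 1]])   # element-wise feature difference
        return torch.sigmoid(self.score(diff)).squeeze(-1)  # probability that an edge exists


class ViewGraphGCN(nn.Module):
    """End-to-end sketch: image embeddings -> GCN propagation -> pairwise link prediction."""
    def __init__(self, feat_dim, hidden_dim):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)
        self.predictor = LinkPredictor(hidden_dim)

    def forward(self, node_feats, adj, candidate_pairs):
        h = self.gcn2(self.gcn1(node_feats, adj), adj)
        return self.predictor(h, candidate_pairs)


if __name__ == "__main__":
    N, D = 8, 128                      # 8 images, 128-d embeddings (e.g. from an image backbone)
    feats = torch.randn(N, D)          # placeholder embeddings for the toy example
    adj = torch.eye(N)                 # toy initial graph: self-loops only
    pairs = torch.tensor([[0, 1], [2, 3], [4, 7]])
    model = ViewGraphGCN(D, 64)
    print(model(feats, adj, pairs))    # edge probabilities for the candidate pairs
```

In such a design the link predictor and the graph-convolution layers share gradients, so node features and linkage prediction can be optimised jointly, which is the end-to-end behaviour the abstract describes.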
- Published
- 2022