
CIGNet: Category-and-Intrinsic-Geometry Guided Network for 3D coarse-to-fine reconstruction.

Authors :
Gao, Junna
Kong, Dehui
Wang, Shaofan
Li, Jinghua
Yin, Baocai
Source :
Neurocomputing. Oct 2023, Vol. 554.
Publication Year :
2023

Abstract

3D object reconstruction from arbitrary-view intensity images is a challenging but meaningful research topic in computer vision. The main limitations of existing approaches are that they lack complete and efficient prior information and may fail under severe occlusion or partial observation of 3D objects, producing incomplete and unreliable reconstructions. To reconstruct structure and recover missing or unseen parts of objects, a category prior and intrinsic geometry relations are particularly useful and necessary during the 3D reconstruction process. In this paper, we propose the Category-and-Intrinsic-Geometry Guided Network (CIGNet) for 3D coarse-to-fine reconstruction from arbitrary-view intensity images, leveraging a category prior and intrinsic geometry relations. CIGNet combines a category-prior-guided reconstruction module with an intrinsic-geometry-relation-guided refinement module. In the first (reconstruction) module, we leverage semantic class context by adding a supervision term over object categories to output coarse reconstructed results. In the second (refinement) module, we model the coarse 3D volumetric data as 2D slices and exploit the intrinsic geometry relations between them to design graph structures over coarse 3D volumes, completing a graph-based refinement. CIGNet accomplishes high-quality 3D reconstruction by exploring the intra-category characteristics of objects as well as the intrinsic geometry relations within each object, both of which serve as useful complements to the visual information of images, in a coarse-to-fine fashion. Extensive quantitative and qualitative experiments on the synthetic dataset ShapeNet and the real-world datasets Pix3D, Statue Model Repository, and BlendedMVS indicate that CIGNet outperforms several state-of-the-art methods in terms of accuracy and detail recovery. [ABSTRACT FROM AUTHOR]
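The abstract's refinement idea, viewing a coarse voxel volume as a stack of 2D slices and connecting related slices in a graph, can be illustrated with a minimal sketch. This is not the paper's learned graph network: the function name `slice_graph_refine`, the cosine-similarity edge rule, and the similarity threshold are all hypothetical stand-ins, and the "refinement" here is a single averaging step over graph neighbours rather than a trained model.

```python
import numpy as np

def slice_graph_refine(volume, sim_threshold=0.9):
    """Hypothetical sketch: treat a coarse voxel volume of shape (D, H, W)
    as D two-dimensional slices, link slices whose occupancy patterns are
    similar, and average each slice with its graph neighbours."""
    d = volume.shape[0]
    feats = volume.reshape(d, -1).astype(float)          # one feature vector per slice
    unit = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    sim = unit @ unit.T                                  # cosine similarity between slices
    adj = (sim > sim_threshold).astype(float)            # edges: sufficiently similar slices
    adj = np.maximum(adj, np.eye(d))                     # ensure self-loops (keeps rows nonzero)
    adj /= adj.sum(axis=1, keepdims=True)                # row-normalise into averaging weights
    refined = (adj @ feats).reshape(volume.shape)        # one propagation step over the graph
    return refined

# Usage: refine a random coarse occupancy volume.
coarse = (np.random.rand(8, 4, 4) > 0.5).astype(float)
fine = slice_graph_refine(coarse)
```

Because each output slice is a convex combination of input slices, occupancy values stay in [0, 1]; a learned variant would replace the fixed averaging weights with trainable graph-convolution parameters.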

Subjects

Subjects :
*COMPUTER vision

Details

Language :
English
ISSN :
0925-2312
Volume :
554
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
170047145
Full Text :
https://doi.org/10.1016/j.neucom.2023.126607