
NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation

Authors :
Cui, Ruikai
Liu, Weizhe
Sun, Weixuan
Wang, Senbo
Shang, Taizhang
Li, Yang
Song, Xibin
Yan, Han
Wu, Zhennan
Chen, Shenzhou
Li, Hongdong
Ji, Pan
Publication Year :
2024

Abstract

3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints. Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation without considering spatial consistency. As a result, these approaches exhibit limited versatility in 3D data representation and shape generation, hindering their ability to generate highly diverse 3D shapes that comply with the specified constraints. In this paper, we introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling. To ensure spatial coherence and reduce memory usage, we incorporate a hybrid shape representation technique that directly learns a continuous signed distance field representation of the 3D shape using orthogonal 2D planes. Additionally, we meticulously enforce spatial correspondences across distinct planes using a transformer-based autoencoder structure, promoting the preservation of spatial relationships in the generated 3D shapes. This yields an algorithm that consistently outperforms state-of-the-art 3D shape generation methods on various tasks, including unconditional shape generation, multi-modal shape completion, single-view reconstruction, and text-to-shape synthesis.
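The abstract describes learning a continuous signed distance field from orthogonal 2D feature planes. Below is a minimal, hypothetical sketch of how such a triplane-style SDF query could look, assuming the hybrid representation resembles a standard triplane decoder: three learnable planes are sampled at a 3D point's projections and a small MLP maps the aggregated features to a signed distance. Class names, shapes, and the MLP are illustrative assumptions, not the authors' implementation, and the transformer-based autoencoder that enforces cross-plane correspondence is not shown.

```python
import torch
import torch.nn.functional as F

class TriplaneSDF(torch.nn.Module):
    """Hypothetical triplane SDF decoder (not the paper's code)."""

    def __init__(self, channels: int = 32, resolution: int = 128):
        super().__init__()
        # Three learnable orthogonal feature planes: XY, XZ, YZ, each (1, C, H, W).
        self.planes = torch.nn.ParameterList(
            [torch.nn.Parameter(0.01 * torch.randn(1, channels, resolution, resolution))
             for _ in range(3)]
        )
        # Small MLP decodes concatenated plane features into a signed distance value.
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 * channels, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) query points assumed to lie in [-1, 1]^3.
        projections = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]
        feats = []
        for plane, uv in zip(self.planes, projections):
            # grid_sample takes a (1, N, 1, 2) grid and returns (1, C, N, 1).
            grid = uv.view(1, -1, 1, 2)
            sampled = F.grid_sample(plane, grid, align_corners=True)
            feats.append(sampled.squeeze(0).squeeze(-1).t())  # (N, C)
        return self.mlp(torch.cat(feats, dim=-1)).squeeze(-1)  # (N,) signed distances

# Usage: query signed distances at random points.
sdf = TriplaneSDF()
points = torch.rand(1024, 3) * 2 - 1
values = sdf(points)  # (1024,)
```

In this kind of setup, the mesh surface is typically recovered as the zero level set of the decoded field (e.g., via marching cubes over a dense query grid), while generation operates on the compact plane features rather than on a full 3D volume.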

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438541757
Document Type :
Electronic Resource