
Frankenstein: Generating Semantic-Compositional 3D Scenes in One Tri-Plane

Authors :
Yan, Han
Li, Yang
Wu, Zhennan
Chen, Shenzhou
Sun, Weixuan
Shang, Taizhang
Liu, Weizhe
Chen, Tian
Dai, Xiaqiang
Ma, Chao
Li, Hongdong
Ji, Pan
Publication Year :
2024

Abstract

We present Frankenstein, a diffusion-based framework that can generate semantic-compositional 3D scenes in a single pass. Unlike existing methods that output a single, unified 3D shape, Frankenstein simultaneously generates multiple separated shapes, each corresponding to a semantically meaningful part. The 3D scene information is encoded in a single tri-plane tensor, from which multiple Signed Distance Function (SDF) fields can be decoded to represent the compositional shapes. During training, an auto-encoder compresses the tri-planes into a latent space, and a denoising diffusion process is then employed to approximate the distribution of the compositional scenes. Frankenstein demonstrates promising results in generating room interiors as well as human avatars with automatically separated parts. The generated scenes facilitate many downstream applications, such as part-wise re-texturing, object rearrangement in the room, or avatar cloth re-targeting. Our project page is available at: https://wolfball.github.io/frankenstein/.
Comment: SIGGRAPH Asia 2024 Conference Paper
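
To make the decoding step concrete, the following is a minimal PyTorch sketch (not the authors' code) of how a single tri-plane tensor can be queried at 3D points and decoded into one SDF per semantic part; the module names, feature sizes, and per-part MLP heads are illustrative assumptions rather than the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneSDFDecoder(nn.Module):
    def __init__(self, feat_dim=32, res=128, num_parts=8, hidden=128):
        super().__init__()
        # One shared tri-plane: three axis-aligned feature planes (xy, xz, yz).
        self.planes = nn.Parameter(torch.randn(3, feat_dim, res, res) * 0.01)
        # One small MLP head per semantic part, each predicting an SDF value
        # (the per-part heads are an assumption for this sketch).
        self.heads = nn.ModuleList([
            nn.Sequential(
                nn.Linear(3 * feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_parts)
        ])

    def sample_plane(self, plane, coords2d):
        # coords2d: (N, 2) in [-1, 1]; grid_sample expects a (B, H, W, 2) grid.
        grid = coords2d.view(1, -1, 1, 2)
        feats = F.grid_sample(plane.unsqueeze(0), grid, align_corners=True)
        return feats.squeeze(0).squeeze(-1).t()  # (N, feat_dim)

    def forward(self, points):
        # points: (N, 3) in [-1, 1]; project onto the three planes and concatenate.
        xy, xz, yz = points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]
        f = torch.cat([
            self.sample_plane(self.planes[0], xy),
            self.sample_plane(self.planes[1], xz),
            self.sample_plane(self.planes[2], yz),
        ], dim=-1)
        # One SDF value per part -> (N, num_parts).
        return torch.cat([head(f) for head in self.heads], dim=-1)

decoder = TriPlaneSDFDecoder()
pts = torch.rand(1024, 3) * 2 - 1   # random query points in [-1, 1]^3
sdfs = decoder(pts)                 # (1024, 8): one SDF field per semantic part

In the paper's pipeline, the diffusion model would operate in the auto-encoder's latent space rather than on the raw tri-plane shown here; this sketch only illustrates the shared tri-plane with multiple decoded SDF fields.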

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.16210
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3680528.3687672