
Compositional 3D Scene Synthesis with Scene Graph Guided Layout-Shape Generation

Authors :
Wei, Yao
Min, Martin Renqiang
Vosselman, George
Li, Li Erran
Yang, Michael Ying
Publication Year :
2024

Abstract

Compositional 3D scene synthesis has diverse applications across a spectrum of industries such as robotics, films, and video games, as it closely mirrors the complexity of real-world multi-object environments. Early works typically employ shape-retrieval-based frameworks, which naturally suffer from limited shape diversity. Recent progress has been made in shape generation with powerful generative models, such as diffusion models, which increase shape fidelity. However, these approaches treat 3D shape generation and layout generation separately, and the synthesized scenes are often hampered by layout collisions, which implies that scene-level fidelity is still under-explored. In this paper, we aim to generate realistic and plausible 3D scenes from scene graphs. To enrich the representational capability of the given scene graph inputs, a large language model is utilized to explicitly aggregate global graph features with local relationship features. With a unified graph convolutional network (GCN), graph features are extracted from scene graphs updated via a joint layout-shape distribution. During scene generation, an IoU-based regularization loss is introduced to constrain the predicted 3D layouts. Benchmarked on the SG-FRONT dataset, our method achieves better 3D scene synthesis, especially in terms of scene-level fidelity. The source code will be released after publication.
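The abstract does not spell out the form of the IoU-based regularization loss, so the following is only a minimal sketch of one plausible variant: a pairwise overlap penalty over axis-aligned 3D bounding boxes, assuming a hypothetical box parameterization of center plus size. The function names (pairwise_iou_3d, iou_regularization) and the normalization choice are illustrative assumptions, not the paper's method.

```python
import torch

def pairwise_iou_3d(boxes: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU for axis-aligned 3D boxes.

    boxes: (N, 6) tensor of [cx, cy, cz, sx, sy, sz] (center + size).
    This parameterization is an assumption; the paper's layout format
    may differ (e.g., include orientation).
    """
    mins = boxes[:, :3] - boxes[:, 3:] / 2   # (N, 3) lower corners
    maxs = boxes[:, :3] + boxes[:, 3:] / 2   # (N, 3) upper corners
    # Intersection extents for every box pair, clamped at zero overlap.
    inter = (torch.minimum(maxs[:, None], maxs[None]) -
             torch.maximum(mins[:, None], mins[None])).clamp(min=0)
    inter_vol = inter.prod(dim=-1)           # (N, N) intersection volumes
    vol = boxes[:, 3:].prod(dim=-1)          # (N,) per-box volumes
    union = vol[:, None] + vol[None] - inter_vol
    return inter_vol / union.clamp(min=1e-8)

def iou_regularization(boxes: torch.Tensor) -> torch.Tensor:
    """Penalize overlap between distinct predicted boxes (anti-collision)."""
    iou = pairwise_iou_3d(boxes)
    n = boxes.shape[0]
    off_diag = ~torch.eye(n, dtype=torch.bool, device=boxes.device)
    return iou[off_diag].sum() / max(n * (n - 1), 1)
```

In training, such a term would typically be added to the main objective with a weighting coefficient, e.g. `loss = recon_loss + lambda_iou * iou_regularization(pred_boxes)`, so that predicted layouts are pushed apart only insofar as they overlap.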

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438537828
Document Type :
Electronic Resource