DepthGAN: GAN-based depth generation from semantic layouts.
- Source :
- Computational Visual Media; Jun2024, Vol. 10 Issue 3, p505-522, 18p
- Publication Year :
- 2024
-
Abstract
- Existing GAN-based generative methods are typically used for semantic image synthesis. We pose the question of whether GAN-based architectures can generate plausible depth maps, and find that existing methods have difficulty generating depth maps that reasonably represent 3D scene structure, due to the lack of global geometric correlations. Thus, we propose DepthGAN, a novel method for generating a depth map from a semantic layout, to aid the construction and manipulation of well-structured 3D scene point clouds. Specifically, we first build a feature generation model with a cascade of semantically aware transformer blocks to obtain depth features with global structural information. For our semantically aware transformer block, we propose a mixed attention module and a semantically aware layer normalization module to better exploit semantic consistency for depth feature generation. Moreover, we present a novel semantically weighted depth synthesis module, which generates adaptive depth intervals for the current scene. We generate the final depth map using a weighted combination of semantically aware depth weights for different depth ranges, yielding a more accurate depth map. Extensive experiments on indoor and outdoor datasets demonstrate that DepthGAN achieves superior results, both quantitatively and visually, for the depth generation task. [ABSTRACT FROM AUTHOR]
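- The weighted depth synthesis step described in the abstract — predicting adaptive depth intervals and combining their centers with per-pixel weights — can be sketched as follows. This is a minimal illustration under assumed shapes and names (`synthesize_depth`, `logits`, `bin_edges` are not from the paper); the actual module also conditions the intervals on the semantic layout.

```python
import numpy as np

def synthesize_depth(logits, bin_edges):
    """Illustrative sketch: combine K depth intervals into a depth map.

    logits:    (H, W, K) per-pixel scores over K depth intervals
    bin_edges: (K+1,) increasing interval boundaries (adaptive per
               scene in the paper; fixed here for simplicity)
    """
    # Softmax over the interval dimension -> per-pixel depth weights
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    # Represent each interval by its center depth
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])   # (K,)
    # Final depth = weighted combination of interval centers
    return (w * centers).sum(axis=-1)                  # (H, W)

# Toy usage: a 2x2 "image" with 4 depth intervals spanning 0-8 m
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, 4))
edges = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
depth = synthesize_depth(logits, edges)
```

Because the weights sum to one per pixel, every output depth lies between the smallest and largest interval centers, which keeps predictions within the scene's depth range.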
- Subjects :
- POINT cloud
Details
- Language :
- English
- ISSN :
- 2096-0433
- Volume :
- 10
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Computational Visual Media
- Publication Type :
- Academic Journal
- Accession number :
- 177220795
- Full Text :
- https://doi.org/10.1007/s41095-023-0350-8