
GGHead: Fast and Generalizable 3D Gaussian Heads

Authors:
Kirschstein, Tobias
Giebenhain, Simon
Tang, Jiapeng
Georgopoulos, Markos
Nießner, Matthias
Publication Year:
2024

Abstract

Learning 3D head priors from large 2D image collections is an important step towards high-quality 3D-aware human modeling. A core requirement is an efficient architecture that scales well to large-scale datasets and large image resolutions. Unfortunately, existing 3D GANs struggle to scale to high-resolution sample generation due to their relatively slow training and rendering speeds, and typically have to rely on 2D super-resolution networks at the expense of global 3D consistency. To address these challenges, we propose Generative Gaussian Heads (GGHead), which adopts the recent 3D Gaussian Splatting representation within a 3D GAN framework. To generate a 3D representation, we employ a powerful 2D CNN generator to predict Gaussian attributes in the UV space of a template head mesh. This way, GGHead exploits the regularity of the template's UV layout, substantially facilitating the challenging task of predicting an unstructured set of 3D Gaussians. We further improve the geometric fidelity of the generated 3D representations with a novel total variation loss on rendered UV coordinates. Intuitively, this regularization encourages neighboring rendered pixels to stem from neighboring Gaussians in the template's UV space. Taken together, our pipeline can efficiently generate 3D heads while being trained only on single-view 2D image observations. Our proposed framework matches the quality of existing 3D head GANs on FFHQ while being both substantially faster and fully 3D consistent. As a result, we demonstrate real-time generation and rendering of high-quality 3D-consistent heads at $1024^2$ resolution for the first time. Project Website: https://tobias-kirschstein.github.io/gghead

Comment: Project Page: https://tobias-kirschstein.github.io/gghead/ ; YouTube Video: https://youtu.be/M5vq3DoZ7RI
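The abstract's two key ideas can be illustrated concretely: a 2D CNN predicts per-texel Gaussian attributes in the UV space of a template head mesh, and a total variation penalty on rendered UV coordinates ties neighboring pixels to neighboring Gaussians. The PyTorch sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the class and function names, the 14-channel attribute layout, and the toy network architecture are hypothetical, and only the general mechanism (sampling a CNN-predicted UV attribute map at fixed template UV locations, plus a TV loss on a rendered UV image) follows the abstract.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UVGaussianGenerator(nn.Module):
    """Hypothetical sketch of a UV-space Gaussian generator.

    A 2D CNN maps a latent code to an attribute image in the UV space
    of a template head mesh; per-Gaussian attributes are then read off
    at the template's fixed UV coordinates.
    """

    def __init__(self, latent_dim: int = 512, uv_res: int = 256):
        super().__init__()
        self.uv_res = uv_res
        # 14 channels as an illustrative layout: 3 position offset,
        # 3 log-scale, 4 rotation quaternion, 3 color, 1 opacity.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64 * (uv_res // 8) ** 2),
            nn.Unflatten(1, (64, uv_res // 8, uv_res // 8)),
            nn.Upsample(scale_factor=8, mode="bilinear"),
            nn.Conv2d(64, 14, kernel_size=3, padding=1),
        )

    def forward(self, z: torch.Tensor, template_uv: torch.Tensor) -> torch.Tensor:
        """z: (B, latent_dim); template_uv: (N, 2) UV coordinates of the
        template's Gaussians in [-1, 1]. Returns (B, N, 14) attributes."""
        attr_map = self.net(z)  # (B, 14, H, W) attribute image in UV space
        grid = template_uv.view(1, -1, 1, 2).expand(attr_map.shape[0], -1, -1, -1)
        # Bilinearly sample the attribute map at each Gaussian's UV location,
        # exploiting the regularity of the template's UV layout.
        attrs = F.grid_sample(attr_map, grid, align_corners=False)  # (B, 14, N, 1)
        return attrs.squeeze(-1).transpose(1, 2)  # (B, N, 14)


def uv_total_variation_loss(rendered_uv: torch.Tensor) -> torch.Tensor:
    """TV penalty on a rendered UV-coordinate image (B, 2, H, W):
    large UV jumps between adjacent pixels are penalized, so neighboring
    rendered pixels are encouraged to stem from neighboring Gaussians
    in the template's UV space."""
    du = (rendered_uv[..., :, 1:] - rendered_uv[..., :, :-1]).abs().mean()
    dv = (rendered_uv[..., 1:, :] - rendered_uv[..., :-1, :]).abs().mean()
    return du + dv


# Toy usage: sample attributes for 10,000 template Gaussians.
gen = UVGaussianGenerator()
z = torch.randn(4, 512)
uv = torch.rand(10_000, 2) * 2 - 1  # template UVs mapped to [-1, 1]
attrs = gen(z, uv)  # (4, 10000, 14)
```

In the actual method, the attribute map comes from a far more powerful 2D GAN generator and the resulting Gaussians are rasterized with 3D Gaussian Splatting; the sketch only shows where the UV-space regularity and the TV regularizer enter the pipeline.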

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2406.09377
Document Type:
Working Paper
Full Text:
https://doi.org/10.1145/3680528.3687686