
TEXGen: a Generative Diffusion Model for Mesh Textures.

Authors :
Yu, Xin
Yuan, Ze
Guo, Yuan-Chen
Liu, Ying-Tian
Liu, Jianhui
Li, Yangguang
Cao, Yan-Pei
Liang, Ding
Qi, Xiaojuan
Source :
ACM Transactions on Graphics; Dec 2024, Vol. 43 Issue 6, p1-14, 14p
Publication Year :
2024

Abstract

While high-quality texture maps are essential for realistic 3D asset rendering, few studies have explored learning directly in the texture space, especially on large-scale datasets. In this work, we depart from the conventional approach of relying on pre-trained 2D diffusion models for test-time optimization of 3D textures. Instead, we focus on the fundamental problem of learning in the UV texture space itself. For the first time, we train a large diffusion model capable of directly generating high-resolution texture maps in a feed-forward manner. To facilitate efficient learning in high-resolution UV spaces, we propose a scalable network architecture that interleaves convolutions on UV maps with attention layers on point clouds. Leveraging this architectural design, we train a 700-million-parameter diffusion model that can generate UV texture maps guided by text prompts and single-view images. Once trained, our model naturally supports various extended applications, including text-guided texture inpainting, sparse-view texture completion, and text-driven texture synthesis. The code is available at https://github.com/CVMI-Lab/TEXGen. [ABSTRACT FROM AUTHOR]
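The abstract's key architectural idea is interleaving local convolutions on the 2D UV map with global attention over 3D surface points. The following is a minimal NumPy sketch of that interleaving pattern, not the authors' implementation: the block shapes, the single-head attention, the integer texel coordinates, and all function names here (`uv_conv3x3`, `point_attention`, `hybrid_block`) are illustrative assumptions.

```python
import numpy as np

def uv_conv3x3(feat, kernel):
    # feat: (H, W, C) UV feature map; kernel: (3, 3, C, C).
    # Zero-padded 3x3 convolution: local feature mixing in UV space.
    H, W, C = feat.shape
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(feat)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + H, dx:dx + W] @ kernel[dy, dx]
    return out

def point_attention(pts_feat):
    # pts_feat: (N, C). Single-head self-attention among surface points,
    # giving global information flow that ignores UV-chart seams.
    scores = pts_feat @ pts_feat.T / np.sqrt(pts_feat.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ pts_feat

def hybrid_block(feat, uv_coords, kernel):
    # feat: (H, W, C) UV map; uv_coords: (N, 2) integer texel coords, one per point.
    feat = uv_conv3x3(feat, kernel)          # 1) local mixing on the UV map
    rows, cols = uv_coords[:, 0], uv_coords[:, 1]
    pts = feat[rows, cols]                   # 2) gather point features from UV texels
    pts = pts + point_attention(pts)         # 3) global mixing over the point cloud (residual)
    feat = feat.copy()
    feat[rows, cols] = pts                   # 4) scatter updated features back to the UV map
    return feat

rng = np.random.default_rng(0)
H, W, C, N = 16, 16, 8, 32
feat = rng.standard_normal((H, W, C))
kernel = rng.standard_normal((3, 3, C, C)) * 0.1
uv = rng.integers(0, H, size=(N, 2))         # hypothetical per-point texel coordinates
out = hybrid_block(feat, uv, kernel)
print(out.shape)  # (16, 16, 8)
```

The gather/scatter round trip is what lets the attention operate on geometry-aware point features while the convolution keeps the cheap, regular structure of the 2D texture grid.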

Details

Language :
English
ISSN :
0730-0301
Volume :
43
Issue :
6
Database :
Complementary Index
Journal :
ACM Transactions on Graphics
Publication Type :
Academic Journal
Accession number :
180967063
Full Text :
https://doi.org/10.1145/3687909