
Efficient Scaling of Diffusion Transformers for Text-to-Image Generation

Authors :
Li, Hao
Lal, Shamit
Li, Zhiheng
Xie, Yusheng
Wang, Ying
Zou, Yang
Majumder, Orchid
Manmatha, R.
Tu, Zhuowen
Ermon, Stefano
Soatto, Stefano
Swaminathan, Ashwin
Publication Year :
2024

Abstract

We empirically study the scaling properties of various Diffusion Transformers (DiTs) for text-to-image generation by performing extensive and rigorous ablations, including training scaled DiTs ranging from 0.3B up to 8B parameters on datasets of up to 600M images. We find that U-ViT, a pure self-attention-based DiT model, provides a simpler design and scales more effectively than cross-attention-based DiT variants, which allows straightforward expansion to extra conditions and other modalities. We identify that a 2.3B U-ViT model can achieve better performance than the SDXL UNet and other DiT variants in a controlled setting. On the data scaling side, we investigate how increasing dataset size and enhanced long captions improve text-image alignment performance and learning efficiency.
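To make the architectural distinction in the abstract concrete, below is a minimal sketch (not the authors' code) contrasting the two conditioning strategies: a cross-attention DiT block, where text embeddings enter through a dedicated attention layer, versus a U-ViT-style block, where text tokens are simply concatenated with image tokens and processed by pure self-attention. The layer choices and dimensions here are illustrative assumptions, not the paper's configuration.

```python
# Illustrative sketch only: contrasts cross-attention conditioning with
# U-ViT-style token concatenation; not the paper's actual model code.
import torch
import torch.nn as nn


class CrossAttentionBlock(nn.Module):
    """DiT-style block: self-attention over image tokens, then cross-attention to text."""

    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        h = self.norm1(img)
        img = img + self.self_attn(h, h, h)[0]
        img = img + self.cross_attn(self.norm2(img), txt, txt)[0]  # text conditioning enters here
        return img + self.mlp(self.norm3(img))


class ConcatSelfAttentionBlock(nn.Module):
    """U-ViT-style block: text tokens are concatenated with image tokens; pure self-attention."""

    def __init__(self, dim: int, heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        x = torch.cat([txt, img], dim=1)  # extra conditions are just additional tokens
        h = self.norm1(x)
        x = x + self.attn(h, h, h)[0]
        x = x + self.mlp(self.norm2(x))
        return x[:, txt.shape[1]:]  # return only the image-token stream


if __name__ == "__main__":
    dim, heads = 64, 4
    img = torch.randn(2, 256, dim)  # 2 samples, 256 patch tokens
    txt = torch.randn(2, 77, dim)   # 2 captions, 77 text tokens
    print(CrossAttentionBlock(dim, heads)(img, txt).shape)       # torch.Size([2, 256, 64])
    print(ConcatSelfAttentionBlock(dim, heads)(img, txt).shape)  # torch.Size([2, 256, 64])
```

The concatenation variant needs no per-block cross-attention module, which is one reason a design of this kind can be simpler to extend to additional conditions or modalities, as the abstract notes.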

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2412.12391
Document Type :
Working Paper