
Training-free Regional Prompting for Diffusion Transformers

Authors :
Chen, Anthony
Xu, Jianjin
Zheng, Wenzhao
Dai, Gaole
Wang, Yida
Zhang, Renrui
Wang, Haofan
Zhang, Shanghang
Publication Year :
2024

Abstract

Diffusion models have demonstrated excellent capabilities in text-to-image generation. Their semantic understanding (i.e., prompt-following) ability has also been greatly improved with large language models (e.g., T5, Llama). However, existing models still cannot perfectly handle long and complex text prompts, especially when the prompts contain multiple objects with numerous attributes and interrelated spatial relationships. While many regional prompting methods have been proposed for UNet-based models (SD1.5, SDXL), there are still no implementations based on the recent Diffusion Transformer (DiT) architecture, such as SD3 and FLUX.1. In this report, we propose and implement regional prompting for FLUX.1 based on attention manipulation, which equips DiT with fine-grained compositional text-to-image generation capability in a training-free manner. Code is available at https://github.com/antonioo-c/Regional-Prompting-FLUX.
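
The abstract does not spell out the attention-manipulation mechanism, but the general idea behind training-free regional prompting is to restrict joint text-image attention so that image tokens inside a region interact only with the text tokens of that region's prompt. The sketch below illustrates one such regional attention mask in PyTorch; all names (`build_regional_attn_mask`, `region_masks`, `prompt_lens`) are illustrative assumptions, not the paper's actual API, and details such as overlapping regions or a global base prompt are omitted.

```python
# A minimal sketch of regional prompting via attention masking for a
# DiT-style joint text-image attention layer. All function and variable
# names here are illustrative assumptions, not the paper's actual API.
import torch
import torch.nn.functional as F

def build_regional_attn_mask(region_masks, prompt_lens, device="cpu"):
    """Build an (L, L) boolean mask over [text tokens | image tokens].

    region_masks: list of flattened boolean masks, one per regional prompt,
                  each of shape (num_image_tokens,); assumed non-overlapping
                  and jointly covering the whole image for simplicity.
    prompt_lens:  number of text tokens in each regional prompt.
    True = attention allowed; L = sum(prompt_lens) + num_image_tokens.
    """
    n_img = region_masks[0].numel()
    n_txt = sum(prompt_lens)
    allow = torch.zeros(n_txt + n_img, n_txt + n_img,
                        dtype=torch.bool, device=device)

    # Image tokens always attend to all other image tokens.
    allow[n_txt:, n_txt:] = True

    offset = 0
    for mask, plen in zip(region_masks, prompt_lens):
        img_idx = n_txt + mask.nonzero(as_tuple=True)[0]
        # Text tokens of one regional prompt attend among themselves...
        allow[offset:offset + plen, offset:offset + plen] = True
        # ...and bidirectionally with the image tokens inside their region.
        allow[offset:offset + plen, img_idx] = True
        allow[img_idx, offset:offset + plen] = True
        offset += plen
    return allow

def regional_attention(q, k, v, attn_mask):
    # q, k, v: (batch, heads, L, head_dim); attn_mask: (L, L) bool,
    # where False entries are excluded from the softmax.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)

# Toy usage: a 4x4 latent split into top/bottom halves, one prompt each.
n_img = 16
top = torch.zeros(n_img, dtype=torch.bool)
top[: n_img // 2] = True
mask = build_regional_attn_mask([top, ~top], prompt_lens=[8, 8])
q = k = v = torch.randn(1, 4, mask.shape[0], 32)
print(regional_attention(q, k, v, mask).shape)  # torch.Size([1, 4, 32, 32])
```

In a real pipeline such a mask would be applied inside each joint attention block of the DiT; handling overlapping regions (e.g., by blending per-region attention outputs) and a background prompt are left out of this sketch for brevity.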

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2411.02395
Document Type :
Working Paper