
Non-confusing Generation of Customized Concepts in Diffusion Models

Authors :
Lin, Wang
Chen, Jingyuan
Shi, Jiaxin
Zhu, Yichen
Liang, Chen
Miao, Junzhong
Jin, Tao
Zhao, Zhou
Wu, Fei
Yan, Shuicheng
Zhang, Hanwang
Publication Year :
2024

Abstract

We tackle the common challenge of inter-concept visual confusion in compositional concept generation using text-guided diffusion models (TGDMs). The confusion becomes even more pronounced in the generation of customized concepts, due to the scarcity of user-provided visual examples of each concept. By revisiting the two major stages behind the success of TGDMs, namely 1) contrastive image-language pre-training (CLIP) of the text encoder that encodes visual semantics, and 2) training the TGDM that decodes the textual embeddings into pixels, we point out that existing customized generation methods only focus on fine-tuning the second stage while overlooking the first one. To close this gap, we propose a simple yet effective solution called CLIF: contrastive image-language fine-tuning. Specifically, given a few samples of customized concepts, we obtain non-confusing textual embeddings of a concept by fine-tuning CLIP, contrasting a concept against the over-segmented visual regions of other concepts. Experimental results demonstrate the effectiveness of CLIF in preventing confusion in multi-customized concept generation.
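The abstract gives no implementation details, but the core idea, contrasting a concept's CLIP text embedding against over-segmented image regions of other concepts, maps naturally onto a CLIP-style contrastive loss. Below is a minimal, hypothetical PyTorch sketch of such an objective; the function name clif_contrastive_loss, the multi-positive InfoNCE form, the temperature value, and the way regions are embedded are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def clif_contrastive_loss(text_emb, pos_region_embs, neg_region_embs, temperature=0.07):
    # Hypothetical CLIF-style objective (assumed form, not from the paper):
    # pull a concept's CLIP text embedding toward image regions depicting that
    # concept, and push it away from over-segmented regions of other concepts.
    #   text_emb:        (d,)   concept embedding from the CLIP text encoder
    #   pos_region_embs: (P, d) CLIP image embeddings of regions of this concept
    #   neg_region_embs: (N, d) CLIP image embeddings of regions of other concepts
    text_emb = F.normalize(text_emb, dim=-1)
    pos = F.normalize(pos_region_embs, dim=-1)
    neg = F.normalize(neg_region_embs, dim=-1)

    # Cosine-similarity logits, positives first, temperature-scaled as in CLIP.
    logits = torch.cat([pos @ text_emb, neg @ text_emb]) / temperature

    # Multi-positive InfoNCE: -log( sum_pos exp / sum_all exp ).
    return torch.logsumexp(logits, dim=0) - torch.logsumexp(logits[:pos.shape[0]], dim=0)

# Toy usage with random embeddings (d = 512 matches CLIP ViT-B/32):
d = 512
loss = clif_contrastive_loss(torch.randn(d), torch.randn(4, d), torch.randn(16, d))

In an actual fine-tuning loop, the gradient of this loss would flow back into the CLIP text encoder, which is the first-stage adjustment the paper argues prior customization methods omit.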

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.06914
Document Type :
Working Paper