
Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing

Authors:
Wang, Kai
Yang, Fei
Yang, Shiqi
Butt, Muhammad Atif
van de Weijer, Joost
Publication Year:
2023

Abstract

Large-scale text-to-image generative models have been a ground-breaking development in generative AI, with diffusion models showing an astounding ability to synthesize convincing images from an input text prompt. The goal of image editing research is to give users control over the generated images by modifying the text prompt. Current image editing techniques are susceptible to unintended modifications of regions outside the targeted area, such as the background or distractor objects that have some semantic or visual relationship with the targeted object. According to our experimental findings, inaccurate cross-attention maps are at the root of this problem. Based on this observation, we propose Dynamic Prompt Learning (DPL) to force cross-attention maps to focus on the correct noun words in the text prompt. By updating the dynamic tokens for nouns in the textual input with the proposed leakage repairment losses, we achieve fine-grained image editing over particular objects while preventing undesired changes to other image regions. Our method DPL, based on the publicly available Stable Diffusion, is extensively evaluated on a wide range of images and consistently obtains superior results both quantitatively (CLIP score, Structure-Dist) and qualitatively (user evaluation). We show improved prompt-editing results for Word-Swap, Prompt Refinement, and Attention Re-weighting, especially for complex multi-object scenes.

Comment: NeurIPS 2023. Code: https://github.com/wangkai930418/DPL
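The core idea — optimizing a noun's token embedding so that its cross-attention mass concentrates inside the object region rather than leaking onto the background — can be sketched with a toy example. Everything below (the random keys, the binary object mask, the finite-difference gradient, the learning rate) is purely illustrative and is not the authors' implementation, which operates on real cross-attention layers inside Stable Diffusion with the losses proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8        # toy embedding dimension
n_pix = 16   # flattened spatial positions of a toy attention map

K = rng.normal(size=(n_pix, d))    # stand-in for image keys in a cross-attention layer
mask = np.zeros(n_pix)             # hypothetical object mask: attention should live here
mask[:4] = 1.0

def attention(token):
    """Softmax cross-attention weights of one text token over all pixels."""
    logits = K @ token / np.sqrt(d)
    e = np.exp(logits - logits.max())
    return e / e.sum()

def leakage_loss(token):
    """Attention mass falling outside the object mask (the 'leakage')."""
    return attention(token)[mask == 0].sum()

def grad(token, eps=1e-4):
    """Central finite-difference gradient; a real implementation would backprop."""
    g = np.zeros_like(token)
    for i in range(len(token)):
        t_hi, t_lo = token.copy(), token.copy()
        t_hi[i] += eps
        t_lo[i] -= eps
        g[i] = (leakage_loss(t_hi) - leakage_loss(t_lo)) / (2 * eps)
    return g

# Optimize the (dynamic) token embedding to reduce cross-attention leakage.
token = rng.normal(size=d)
before = leakage_loss(token)
for _ in range(200):
    token -= 0.5 * grad(token)
after = leakage_loss(token)
```

After the update loop, `after` is well below `before`: the token now attends almost entirely inside the masked region, which is the behavior DPL enforces per noun so that a prompt edit only touches its own object.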

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.15664
Document Type:
Working Paper