
RegionDrag: Fast Region-Based Image Editing with Diffusion Models

Authors:
Lu, Jingyi
Li, Xinghui
Han, Kai
Publication Year:
2024

Abstract

Point-drag-based image editing methods, like DragDiffusion, have attracted significant attention. However, point-drag-based approaches suffer from computational overhead and misinterpretation of user intentions due to the sparsity of point-based editing instructions. In this paper, we propose a region-based copy-and-paste dragging method, RegionDrag, to overcome these limitations. RegionDrag allows users to express their editing instructions in the form of handle and target regions, enabling more precise control and alleviating ambiguity. In addition, region-based operations complete editing in one iteration and are much faster than point-drag-based methods. We also incorporate the attention-swapping technique for enhanced stability during editing. To validate our approach, we extend existing point-drag-based datasets with region-based dragging instructions. Experimental results demonstrate that RegionDrag outperforms existing point-drag-based approaches in terms of speed, accuracy, and alignment with user intentions. Remarkably, RegionDrag completes the edit on an image with a resolution of 512x512 in less than 2 seconds, which is more than 100x faster than DragDiffusion, while achieving better performance. Project page: https://visual-ai.github.io/regiondrag

Comment: ECCV 2024
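The abstract describes the core operation as copying content from a user-specified handle region to a target region in a single pass, rather than iteratively optimizing point positions. The paper's actual method operates on diffusion latents with attention swapping; as a much simpler illustration of the copy-and-paste idea only, here is a hedged NumPy sketch with hypothetical names (`region_copy_paste`, rectangular boxes) that are not from the paper:

```python
import numpy as np

def region_copy_paste(latent, handle_box, target_box):
    """Illustrative sketch: copy features from a handle region to a target region.

    latent: (C, H, W) array of features (e.g., a diffusion latent).
    handle_box, target_box: (top, left, height, width) tuples of equal size.
    The real RegionDrag method supports general regions and uses
    attention swapping for stability; this toy uses rectangles only.
    """
    ht, hl, h, w = handle_box
    tt, tl, th, tw = target_box
    assert (h, w) == (th, tw), "handle and target regions must match in size"
    edited = latent.copy()
    # Paste the handle-region features into the target location in one step.
    edited[:, tt:tt + th, tl:tl + tw] = latent[:, ht:ht + h, hl:hl + w]
    return edited

# Toy usage: move a bright 2x2 patch from the top-left to the bottom-right.
lat = np.zeros((1, 4, 4))
lat[:, 0:2, 0:2] = 1.0
out = region_copy_paste(lat, (0, 0, 2, 2), (2, 2, 2, 2))
```

Because the whole edit is a single tensor copy (plus one denoising pass in the real method), there is no per-point optimization loop, which is consistent with the reported speedup over DragDiffusion.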

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.18247
Document Type:
Working Paper