SpaceEdit: Learning a Unified Editing Space for Open-Domain Image Editing

Authors:
Shi, Jing
Xu, Ning
Zheng, Haitian
Smith, Alex
Luo, Jiebo
Xu, Chenliang
Publication Year:
2021

Abstract

Recently, large pretrained models (e.g., BERT, StyleGAN, CLIP) have shown strong knowledge transfer and generalization capability on various downstream tasks within their domains. Inspired by these efforts, in this paper we propose a unified model for open-domain image editing, focusing on color and tone adjustment of open-domain images while keeping their original content and structure. Our model learns a unified editing space that is more semantic, intuitive, and easy to manipulate than the operation space (e.g., contrast, brightness, color curves) used in much existing photo editing software. Our model follows the image-to-image translation framework, consisting of an image encoder and decoder, and is trained on pairs of before- and after-images to produce multimodal outputs. We show that by inverting image pairs into latent codes of the learned editing space, our model can be leveraged for various downstream editing tasks such as language-guided image editing, personalized editing, editing-style clustering, and retrieval. We extensively study the unique properties of the editing space in experiments and demonstrate superior performance on these tasks.
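The abstract describes the training setup only at a high level: an encoder-decoder image-to-image model whose edits are driven by a latent code from the learned editing space, trained on before/after image pairs. The sketch below is a rough illustration of that setup, not the authors' released code; the paper's actual architecture, losses, and module names are not given in the abstract, so everything here (including `EditEncoder`, `Editor`, and `z_dim`) is an assumption made for illustration.

```python
# Minimal sketch (assumed, not the SpaceEdit implementation) of an
# encoder-decoder editing model conditioned on a latent editing code z.
import torch
import torch.nn as nn

class EditEncoder(nn.Module):
    """Encodes a (before, after) image pair into an editing code z."""
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, z_dim),
        )

    def forward(self, before: torch.Tensor, after: torch.Tensor) -> torch.Tensor:
        # Stack the pair along the channel axis and embed it as one code.
        return self.net(torch.cat([before, after], dim=1))

class Editor(nn.Module):
    """Applies the edit described by z to an input image."""
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.img_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.z_proj = nn.Linear(z_dim, 32)
        self.dec = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, img: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        feat = self.img_enc(img)
        # Broadcast the editing code over spatial positions.
        feat = feat + self.z_proj(z)[:, :, None, None]
        # Predict a residual so the original content/structure is preserved.
        return (img + self.dec(feat)).clamp(0.0, 1.0)

# Training-step sketch: reconstruct the "after" image from the "before"
# image and the code inferred from the pair.
enc, edit = EditEncoder(), Editor()
before, after = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
z = enc(before, after)
loss = nn.functional.l1_loss(edit(before, z), after)
loss.backward()
```

Under this sketch, "inverting" a before/after pair into the editing space simply means running the pair encoder on it; nearest-neighbor search over the resulting codes would support the editing-style retrieval and clustering tasks the abstract mentions, and sampling different codes at test time is what yields multimodal outputs.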

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2112.00180
Document Type:
Working Paper