
FocusGAN: Preserving Background in Text-Guided Image Editing.

Authors :
Zhao, Liuqing
Li, Linyan
Hu, Fuyuan
Xia, Zhenping
Yao, Rui
Source :
International Journal of Pattern Recognition & Artificial Intelligence. 12/20/2021, Vol. 35 Issue 16, p1-16. 16p.
Publication Year :
2021

Abstract

Text-guided image editing (TIE) seeks to manipulate images under the guidance of natural language. However, existing TIE methods often overlook target-irrelevant pixels, so editing may leave the background discolored, distorted, or partially missing. To overcome this problem, we propose a novel TIE method named FocusGAN, which precisely edits the text-relevant pixels while keeping the background invariant. Specifically, we build a two-stage network. In each stage, we first construct a channel-wise subject-focusing attention that makes the generator concentrate on the sub-region best matching each word. Then, a word-level discriminator provides fine-grained feedback by correlating words with image regions, so that the generator can manipulate specific visual attributes without affecting the background. Finally, we propose a background-keeping cyclic loss to further improve the invariance of the background and to encourage editing of the subject described by the given text. Experiments on the CUB and Oxford datasets demonstrate that our approach effectively keeps the background invariant while manipulating images with natural language descriptions. [ABSTRACT FROM AUTHOR]
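The record does not reproduce the paper's training objective, but a background-keeping cyclic loss of the kind the abstract describes could plausibly combine a masked background-reconstruction term with a cycle-reconstruction term. The sketch below is an assumption written in PyTorch; the mask, the weighting factor, and the tensor shapes are hypothetical illustrations, not details taken from the paper.

import torch
import torch.nn.functional as F

def background_keeping_cyclic_loss(x, x_edit, x_cycle, mask, lambda_bg=10.0):
    """Illustrative background-keeping cyclic loss (assumed form, not the authors' code).

    x       : original image batch, shape (B, C, H, W)
    x_edit  : image edited according to the target text
    x_cycle : edited image mapped back using the original description
    mask    : soft foreground mask in [0, 1], shape (B, 1, H, W); 1 = text-relevant region
    """
    # Background term: pixels outside the text-relevant region should stay unchanged.
    bg_loss = F.l1_loss(x_edit * (1 - mask), x * (1 - mask))
    # Cyclic term: re-editing with the original description should recover the input image.
    cyc_loss = F.l1_loss(x_cycle, x)
    return lambda_bg * bg_loss + cyc_loss

In such a formulation, the masked term penalizes any change to target-irrelevant pixels, while the cyclic term discourages edits that cannot be undone, which together push the generator toward modifying only the subject named in the text.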

Details

Language :
English
ISSN :
0218-0014
Volume :
35
Issue :
16
Database :
Academic Search Index
Journal :
International Journal of Pattern Recognition & Artificial Intelligence
Publication Type :
Academic Journal
Accession number :
155475399
Full Text :
https://doi.org/10.1142/S0218001421530086