
Text-Guided Neural Image Inpainting

Authors :
Zhang, Lisai
Chen, Qingcai
Hu, Baotian
Jiang, Shuoran
Publication Year :
2020

Abstract

Image inpainting requires filling corrupted regions of an image with content coherent with the surrounding context. The field has made promising progress through neural image inpainting methods. Nevertheless, inferring the missing content from the context pixels alone remains a critical challenge. The goal of this paper is to fill in the semantic content of corrupted images according to a provided descriptive text. Unlike existing text-guided image generation work, the inpainting model must compare the semantic content of the given text with the remaining part of the image, and then determine the semantic content that should be filled into the missing part. To fulfill this task, we propose a novel inpainting model named Text-Guided Dual Attention Inpainting Network (TDANet). First, a dual multimodal attention mechanism is designed to extract explicit semantic information about the corrupted regions by comparing the descriptive text and the complementary image areas through reciprocal attention. Second, an image-text matching loss is applied to maximize the semantic similarity between the generated image and the text. Experiments are conducted on two open datasets. Results show that the proposed TDANet model reaches a new state of the art on both quantitative and qualitative measures. Result analysis suggests that the generated images are consistent with the guidance text, which enables the generation of diverse results by providing different descriptions. Code is available at https://github.com/idealwhite/TDANet

Comment: ACM MM'2020 (Oral). 9 pages, 4 tables, 7 figures
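
To make the two ideas in the abstract concrete, the sketch below illustrates (a) a reciprocal cross-attention step between text token features and image region features, and (b) a hinge-based image-text matching loss that rewards semantic similarity between matched pairs. This is a minimal illustration only, not the authors' TDANet implementation; the feature shapes, the ReciprocalAttention module, and the image_text_matching_loss function are assumptions introduced here for exposition.

```python
# Minimal sketch (not the authors' code): reciprocal text<->image attention
# and an image-text matching loss, assuming pre-extracted features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReciprocalAttention(nn.Module):
    """Text tokens attend over image regions, and image regions attend over text."""

    def __init__(self, dim):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, text_feats, img_feats):
        # text_feats: (B, T, D) token features; img_feats: (B, R, D) region features
        sim = torch.bmm(text_feats, img_feats.transpose(1, 2)) * self.scale   # (B, T, R)
        text_ctx = torch.bmm(F.softmax(sim, dim=-1), img_feats)               # (B, T, D)
        img_ctx = torch.bmm(F.softmax(sim.transpose(1, 2), dim=-1), text_feats)  # (B, R, D)
        return text_ctx, img_ctx


def image_text_matching_loss(img_emb, txt_emb, margin=0.2):
    """Triplet-style loss: matched image/text pairs should outscore mismatched ones."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    scores = img_emb @ txt_emb.t()                   # (B, B) cosine similarities
    pos = scores.diag().unsqueeze(1)                 # matched pairs lie on the diagonal
    cost_img = (margin + scores - pos).clamp(min=0)  # image-anchored negatives
    cost_txt = (margin + scores - pos.t()).clamp(min=0)  # text-anchored negatives
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    return cost_img.masked_fill(mask, 0).mean() + cost_txt.masked_fill(mask, 0).mean()


if __name__ == "__main__":
    B, T, R, D = 2, 8, 16, 128
    attn = ReciprocalAttention(D)
    text_ctx, img_ctx = attn(torch.randn(B, T, D), torch.randn(B, R, D))
    loss = image_text_matching_loss(img_ctx.mean(1), text_ctx.mean(1))
    print(text_ctx.shape, img_ctx.shape, loss.item())
```

In this toy setup, pooled attended features stand in for the generated-image and text embeddings; the paper's actual architecture and loss details are described in the full text linked below.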

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2004.03212
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3394171.3414017