
MSE-Net: generative image inpainting with multi-scale encoder.

Authors :
Yang, Yizhong
Cheng, Zhihang
Yu, Haotian
Zhang, Yongqiang
Cheng, Xin
Zhang, Zhang
Xie, Guangjun
Source :
Visual Computer. Aug2022, Vol. 38 Issue 8, p2647-2659. 13p.
Publication Year :
2022

Abstract

Image inpainting methods based on deep convolutional neural networks (DCNN), especially generative adversarial networks (GAN), have made tremendous progress due to their powerful representation capabilities. These methods can generate visually reasonable contents and textures; however, existing deep models built on a single receptive-field size not only cause image artifacts and content mismatches but also ignore the correlation between the hole region and long-distance spatial locations in the image. To address these problems, this paper proposes a new GAN-based generative model composed of a two-stage encoder–decoder with a Multi-Scale Encoder Network (MSE-Net) and a new Contextual Attention Model based on the Absolute Value (CAM-AV). The former uses convolution kernels of different sizes to encode features, improving the ability to characterize abstract features. The latter uses a new search algorithm to enhance feature matching within the network. The network is fully convolutional and can complete holes of arbitrary size, number, and spatial location. Experiments with regular and irregular inpainting on datasets including CelebA and Places2 demonstrate that the proposed method achieves higher-quality inpainting results with more reasonable contents than most existing state-of-the-art methods. [ABSTRACT FROM AUTHOR]
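The abstract's central idea of encoding with several receptive-field sizes can be illustrated with a minimal sketch. This is not the paper's implementation: the kernel sizes (3, 5, 7), the single-channel setting, the random stand-in weights, and the stacking fusion are all assumptions made for illustration, since the record contains no architectural details.

```python
# Hedged sketch of a multi-scale encoder branch in the spirit of MSE-Net.
# Kernel sizes, channel layout, and fusion by stacking are assumptions,
# not the paper's actual architecture.
import numpy as np

def conv2d_same(image, kernel):
    """Naive 'same'-padded 2D convolution (deep-learning convention,
    i.e. cross-correlation), single channel."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="constant")
    h, w = image.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_scale_encode(image, kernel_sizes=(3, 5, 7), seed=0):
    """Encode one image with several receptive-field sizes and stack
    the responses, mimicking a multi-scale encoder's parallel branches."""
    rng = np.random.default_rng(seed)
    features = []
    for k in kernel_sizes:
        # Random weights stand in for learned filters in this sketch.
        kernel = rng.standard_normal((k, k)) / (k * k)
        features.append(conv2d_same(image, kernel))
    return np.stack(features, axis=0)  # shape: (num_scales, H, W)

features = multi_scale_encode(np.ones((8, 8)))
print(features.shape)  # (3, 8, 8)
```

Each branch sees a different neighborhood size, so small kernels capture fine texture while large ones capture broader context; in a real network the stacked maps would be fused by further learned convolutions.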

Details

Language :
English
ISSN :
01782789
Volume :
38
Issue :
8
Database :
Academic Search Index
Journal :
Visual Computer
Publication Type :
Academic Journal
Accession number :
158062033
Full Text :
https://doi.org/10.1007/s00371-021-02143-0