PConv: Simple yet Effective Convolutional Layer for Generative Adversarial Network
- Publication Year :
- 2021
Abstract
- This paper presents a novel convolutional layer, called perturbed convolution (PConv), which pursues two goals simultaneously: improving generative adversarial network (GAN) performance and alleviating the memorization problem, in which the discriminator memorizes all images in a given dataset as training progresses. In PConv, perturbed features are generated by randomly disturbing the input tensor before the convolution operation is performed. This approach is simple but surprisingly effective. First, to produce a similar output even from a perturbed tensor, each layer in the discriminator must learn robust features with a small local Lipschitz value. Second, because the input tensor is randomly perturbed during training, much like dropout in neural networks, the memorization problem can be alleviated. To show the generalization ability of the proposed method, we conducted extensive experiments with various loss functions and datasets, including CIFAR-10, CelebA, CelebA-HQ, LSUN, and tiny-ImageNet. The quantitative evaluations demonstrate that PConv effectively boosts the performance of GAN and conditional GAN in terms of Fréchet inception distance (FID).
- Comment: Submitted to journal; arXiv admin note: text overlap with arXiv:1911.10979
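- The abstract describes the mechanism only at a high level: perturb the input tensor at random, then apply the convolution, so that the perturbation acts like dropout during discriminator training. The sketch below illustrates one plausible reading of that description; the perturbation form (multiplicative Gaussian noise), the `noise_std` hyperparameter, and the class name `PConv2d` are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class PConv2d(nn.Module):
    """Illustrative perturbed-convolution layer, following the abstract:
    the input tensor is randomly disturbed before the convolution is applied.
    The exact perturbation used in the paper is not given in the abstract;
    multiplicative Gaussian noise is assumed here."""

    def __init__(self, in_channels, out_channels, kernel_size,
                 stride=1, padding=0, noise_std=0.1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              stride=stride, padding=padding)
        self.noise_std = noise_std  # assumed hyperparameter, not from the paper

    def forward(self, x):
        if self.training:
            # Randomly disturb the input tensor (dropout-like, training only).
            noise = 1.0 + self.noise_std * torch.randn_like(x)
            x = x * noise
        return self.conv(x)

# Usage: drop-in replacement for nn.Conv2d inside a GAN discriminator.
layer = PConv2d(64, 128, kernel_size=3, padding=1)
features = layer(torch.randn(8, 64, 32, 32))
```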
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2101.10841
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.1007/s00521-021-06846-2