1. Generative adversarial networks with decoder–encoder output noises
- Author
- Kaizhu Huang, Yongbin Liu, Youzhao Yang, Da-Han Wang, Guoqiang Zhong, and Wei Gao
- Subjects
- Computer science, Cognitive Neuroscience, Artificial Intelligence, Bayesian inference, Pattern recognition, Image processing, Neural networks, Manifold, Noise, Encoder, Generator, Learnability
- Abstract
Research on image generation has advanced rapidly in recent years. The generative adversarial network (GAN) has emerged as a promising framework that uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants take randomly sampled noise as the input to their generators, they must learn a mapping from an entire random distribution to the image manifold. Because the structures of the random distribution and the image manifold generally differ, GAN and its variants are difficult to train and slow to converge. In this paper, we propose a novel deep model, generative adversarial networks with decoder-encoder output noises (DE-GANs), which combines adversarial training with variational Bayesian inference to improve the image generation performance of GAN and its variants. DE-GANs use a pre-trained decoder-encoder architecture to map random noise vectors to informative ones, which are then fed to the generator of the adversarial networks. Since the decoder-encoder architecture is trained on the same data set as the generator, its output vectors, as inputs to the generator, carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs.
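The decoder-encoder input idea described in the abstract can be sketched as follows. This is an illustrative toy only, not the authors' implementation: the dimensions, the linear decoder/encoder, and all function names are assumptions, and the weights here are random stand-ins for what DE-GANs would pre-train on the same data set as the generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
NOISE_DIM, IMG_DIM = 8, 32

# Stand-in weights; in DE-GANs the decoder-encoder pair would be
# pre-trained on the same data set as the generator.
W_dec = rng.normal(scale=0.1, size=(NOISE_DIM, IMG_DIM))
W_enc = rng.normal(scale=0.1, size=(IMG_DIM, NOISE_DIM))

def decoder(z):
    # Map random noise toward an image-like representation.
    return np.tanh(z @ W_dec)

def encoder(x):
    # Map an image-like vector back to an informative noise code.
    return x @ W_enc

def generator_input(z):
    # The DE-GAN idea: feed the generator encoder(decoder(z)) instead of
    # raw z, so the input carries structure from the data distribution.
    return encoder(decoder(z))

z = rng.normal(size=(4, NOISE_DIM))
z_informative = generator_input(z)
print(z_informative.shape)  # (4, 8)
```

In the full model, `generator_input(z)` would replace the raw noise fed to the GAN generator during adversarial training; only the composition of a pre-trained decoder and encoder is shown here.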
- Published
- 2020