1. Research and Application of a Discrimination-Enhanced Generative Adversarial Model for Text-to-Image Generation (判别增强的生成对抗模型在文本至图像生成中的研究与应用)
- Author
谭红臣, 黄世华, 肖贺文, 于冰冰, and 刘秀平
- Abstract
Most current text-to-image generation algorithms based on Generative Adversarial Networks (GANs) focus on designing attention-based generation models to improve the representation and expression of image details. However, they ignore the discriminator's perception of key local semantics, so the generation model can easily produce poor image details that "fool" the discriminator. This paper designs a word-image discriminative attention module in the discriminator to enhance its ability to perceive and capture key semantics, driving the generation model to produce high-quality image details; the result is a discrimination-enhanced generative adversarial model (DE-GAN). Experimental results show that on the CUB-Bird dataset DE-GAN achieves an Inception Score (IS) of 4.70, a 4.2% improvement over the baseline model. [ABSTRACT FROM AUTHOR]
- Published
- 2022
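The abstract describes a word-image discriminative attention module that lets the discriminator match word-level semantics against local image regions. The paper's exact formulation is not given in this record, so the following is only a minimal NumPy sketch of one plausible word-region attention scheme (cosine-normalized features, softmax attention over regions, per-word matching scores); the function name, the `gamma` sharpening factor, and the feature shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def word_region_attention(words, regions, gamma=5.0):
    """Score how well each word is grounded in the image regions.

    words:   (T, D) word features from a text encoder (assumed shape)
    regions: (R, D) local image features from the discriminator backbone (assumed shape)
    gamma:   sharpening factor for the attention distribution (illustrative)
    Returns a (T,) array of per-word matching scores in [-1, 1].
    """
    # Cosine-normalize both feature sets.
    w = words / (np.linalg.norm(words, axis=1, keepdims=True) + 1e-8)
    r = regions / (np.linalg.norm(regions, axis=1, keepdims=True) + 1e-8)
    sim = w @ r.T                        # (T, R) word-region similarities
    attn = softmax(gamma * sim, axis=1)  # focus each word on its key regions
    context = attn @ r                   # (T, D) attended region context per word
    # Per-word score: similarity between the word and its attended context.
    return np.sum(w * context, axis=1)
```

A discriminator could aggregate these per-word scores (e.g. average them) into an auxiliary real/fake-with-text matching loss, which is the general mechanism the abstract attributes to DE-GAN: making the discriminator sensitive to key local semantics so weak details no longer fool it.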