
How to train your pre-trained GAN models.

Authors: Park, Sung-Wook; Kim, Jun-Yeong; Park, Jun; Jung, Se-Hoon; Sim, Chun-Bo
Source: Applied Intelligence; Nov 2023, Vol. 53, Issue 22, p27001-27026, 26p
Publication Year: 2023

Abstract

Generative Adversarial Networks (GANs) show excellent performance on various problems in computer vision, computer graphics, and machine learning, but require large amounts of data and huge computational resources. Training is also unstable: if the generator and discriminator diverge during training, the GAN subsequently struggles to converge. To tackle these problems, various transfer learning methods have been introduced; however, mode collapse, a form of overfitting, often arises, and these methods remain limited in how well they learn the distribution of the training data. In this paper, we provide a comprehensive review of the latest transfer learning methods as a solution to these problems, propose the most effective method of fixing some layers of the generator and discriminator, and discuss future prospects. The experiments use StyleGAN, and performance is evaluated with Fréchet Inception Distance (FID), coverage, and density. The results show that the proposed method does not overfit and learns the distribution of the training data relatively well compared to previously proposed methods. Moreover, it outperforms existing methods on the Stanford Cars, Stanford Dogs, Oxford Flower, Caltech-256, CUB-200-2011, and Insect-30 datasets. [ABSTRACT FROM AUTHOR]
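The core recipe the abstract describes, freezing a subset of generator and discriminator layers before fine-tuning a pre-trained GAN on a small target dataset, can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the toy Generator and Discriminator stand in for StyleGAN, and the module names and the particular freeze policy (fix the generator's mapping network and coarse block, fix the discriminator's final block) are assumptions made for the example.

```python
# Minimal sketch of layer-freezing for GAN transfer learning.
# The tiny networks below are stand-ins for a pre-trained StyleGAN;
# module names and the freeze policy are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=64):
        super().__init__()
        self.mapping = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU())
        self.block0 = nn.Linear(64, 128)   # coarse layer (candidate to freeze)
        self.block1 = nn.Linear(128, 256)  # fine layer (kept trainable)

    def forward(self, z):
        return self.block1(torch.relu(self.block0(self.mapping(z))))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.block0 = nn.Linear(256, 128)  # early layer (kept trainable)
        self.block1 = nn.Linear(128, 1)    # late layer (candidate to freeze)

    def forward(self, x):
        return self.block1(torch.relu(self.block0(x)))

def freeze_by_prefix(module: nn.Module, prefixes: tuple) -> None:
    """Disable gradients for every parameter whose name starts with a given prefix."""
    for name, param in module.named_parameters():
        if name.startswith(prefixes):
            param.requires_grad = False

G, D = Generator(), Discriminator()
# Example policy: keep G's mapping and coarse block fixed, fine-tune the rest;
# freeze D's final block so the pre-trained decision head is preserved.
freeze_by_prefix(G, ("mapping", "block0"))
freeze_by_prefix(D, ("block1",))

# Only the still-trainable parameters are handed to the optimizers.
g_opt = torch.optim.Adam((p for p in G.parameters() if p.requires_grad), lr=2e-4)
d_opt = torch.optim.Adam((p for p in D.parameters() if p.requires_grad), lr=2e-4)
```

Which layers to fix is exactly the design choice the paper evaluates with FID, coverage, and density; freezing also shrinks the set of optimized parameters, which is why only the `requires_grad` parameters are passed to the optimizers here.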

Details

Language: English
ISSN: 0924-669X
Volume: 53
Issue: 22
Database: Complementary Index
Journal: Applied Intelligence
Publication Type: Academic Journal
Accession Number: 173178585
Full Text: https://doi.org/10.1007/s10489-023-04807-x