
GD-StarGAN: Multi-domain image-to-image translation in garment design.

Authors :
Shen, Yangyun
Huang, Runnan
Huang, Wenkai
Source :
PLoS ONE. 4/21/2020, Vol. 15 Issue 4, p1-15. 15p.
Publication Year :
2020

Abstract

In fashion design, generating a garment image from a texture amounts to reshaping the texture image, a task that image-to-image translation based on Generative Adversarial Networks (GANs) handles well and that can save designers considerable time and effort. GAN-based image-to-image translation has made great progress in recent years. One such model, StarGAN, performs multi-domain image-to-image translation using only a single generator and a single discriminator. This paper details the use of StarGAN for garment design: users need only input an image and a garment-type label to generate garment images bearing the texture of the input image. However, the quality of the images generated by the original StarGAN proved unsatisfactory. This paper therefore introduces improvements to the structure of the StarGAN generator and to its loss function, yielding a model better suited to garment design, called GD-StarGAN. Using a dataset of seven garment categories, the paper demonstrates that GD-StarGAN performs much better than StarGAN in garment design, especially with respect to texture. [ABSTRACT FROM AUTHOR]
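The abstract notes that StarGAN conditions a single generator on a target-domain label. A common way to do this, which StarGAN-style generators use, is to tile the one-hot label into spatial planes and stack them onto the image channels before the first convolution. The sketch below illustrates only that conditioning step; it is not the paper's code, and the function name and seven-category count (matching the dataset mentioned above) are illustrative assumptions.

```python
import numpy as np

def condition_image_with_label(image, label, num_domains):
    """Tile a one-hot domain label into H x W planes and stack them
    onto the image channels, as StarGAN-style generators do before
    the first convolution. `image` is (C, H, W); `label` is an int.
    Illustrative sketch, not the GD-StarGAN implementation."""
    c, h, w = image.shape
    one_hot = np.zeros(num_domains, dtype=image.dtype)
    one_hot[label] = 1.0
    # Broadcast each label entry to a full H x W plane.
    label_maps = np.broadcast_to(one_hot[:, None, None], (num_domains, h, w))
    return np.concatenate([image, label_maps], axis=0)

# Example: a 3-channel 64x64 texture image and 7 garment categories.
x = np.random.rand(3, 64, 64).astype(np.float32)
conditioned = condition_image_with_label(x, label=2, num_domains=7)
print(conditioned.shape)  # (10, 64, 64): 3 image channels + 7 label planes
```

The generator then convolves over all `3 + num_domains` channels, so the same network can map one input texture to any of the garment domains simply by changing the label.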

Details

Language :
English
ISSN :
1932-6203
Volume :
15
Issue :
4
Database :
Academic Search Index
Journal :
PLoS ONE
Publication Type :
Academic Journal
Accession Number :
142835859
Full Text :
https://doi.org/10.1371/journal.pone.0231719