ISA-GAN: inception-based self-attentive encoder–decoder network for face synthesis using delineated facial images.
- Source :
-
Visual Computer. Nov 2024, Vol. 40, Issue 11, p8205-8225. 21p. - Publication Year :
- 2024
-
Abstract
- Facial image synthesis from delineated face images is a challenging computer vision task. Delineated face image synthesis, such as sketch-to-face or thermal-to-visual face generation, is widely addressed with generative adversarial networks (GANs) owing to their generative capability. Pairing a GAN with an attention network yields more accurate and realistic samples, as attention-based learning improves the network by prioritizing specific regions of the image. Motivated by the success of attention mechanisms in the recent literature, we develop a new inception-based encoder–decoder self-attentive generative adversarial network (ISA-GAN) that incorporates an inception network with self-attention-based learning. The proposed network is embedded with parallel self-attention, which helps generate high-quality images and converges faster in terms of training epochs. The proposed approach is evaluated on sketch-to-face synthesis over the CUHK dataset and on thermal-to-visual face synthesis over the WHU-IIP and CVBL-CHILD datasets. ISA-GAN outperforms state-of-the-art generative models for face synthesis, showing an average SSIM improvement of 9.95% on the CUHK dataset, and of 10.38% and 12.58% on the WHU-IIP and CVBL-CHILD datasets, respectively. [ABSTRACT FROM AUTHOR]
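The self-attention described in the abstract can be illustrated with a minimal SAGAN-style self-attention layer over a convolutional feature map: queries and keys are compared across all spatial positions, and the attended features are added back through a residual weight. This is a hedged sketch in plain NumPy; the tensor shapes, reduced projection width, and the residual weight `gamma` are illustrative assumptions, not the paper's exact ISA-GAN configuration:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v, gamma=0.1):
    """Spatial self-attention over a (C, H, W) feature map.

    w_q, w_k: (C, C_red) query/key projections (reduced width, an assumption);
    w_v: (C, C) value projection; gamma: residual mixing weight (learned in
    practice, fixed here for illustration).
    """
    c, h, w = x.shape
    flat = x.reshape(c, h * w)                   # (C, N), N = H*W positions
    q = w_q.T @ flat                             # (C_red, N) queries
    k = w_k.T @ flat                             # (C_red, N) keys
    v = w_v.T @ flat                             # (C, N) values
    logits = q.T @ k                             # (N, N) position similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over key positions
    out = v @ attn.T                             # (C, N) attended features
    return x + gamma * out.reshape(c, h, w)      # residual connection

# Illustrative shapes: 8-channel 4x4 feature map, projections reduced to 2.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w_q = rng.standard_normal((8, 2))
w_k = rng.standard_normal((8, 2))
w_v = rng.standard_normal((8, 8))
y = self_attention(x, w_q, w_k, w_v)
```

Because every spatial position attends to every other, such a layer can emphasize globally consistent facial regions, which is the behavior the abstract attributes to attention-based learning.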
- Subjects :
- *GENERATIVE adversarial networks
*COMPUTER vision
*MOTIVATION (Psychology)
Details
- Language :
- English
- ISSN :
- 0178-2789
- Volume :
- 40
- Issue :
- 11
- Database :
- Academic Search Index
- Journal :
- Visual Computer
- Publication Type :
- Academic Journal
- Accession number :
- 180734091
- Full Text :
- https://doi.org/10.1007/s00371-023-03233-x