
One-to-one Mapping for Unpaired Image-to-image Translation

Authors :
Shen, Zengming
Zhou, S. Kevin
Chen, Yifan
Georgescu, Bogdan
Liu, Xuqi
Huang, Thomas S.
Publication Year :
2019

Abstract

Recently, image-to-image translation has attracted significant interest in the literature, starting from the successful use of the generative adversarial network (GAN), to the introduction of the cyclic constraint, to extensions to multiple domains. However, in existing approaches there is no guarantee that the mapping between two image domains is unique or one-to-one. Here we propose a self-inverse network learning approach for unpaired image-to-image translation. Building on top of CycleGAN, we learn a self-inverse function by simply augmenting the training samples, swapping inputs and outputs during training, and using a separate cycle consistency loss for each mapping direction. The outcome of such learning is a provably one-to-one mapping function. Our extensive experiments on a variety of datasets, including cross-modal medical image synthesis, object transfiguration, and semantic labeling, consistently demonstrate clear improvement over the CycleGAN method both qualitatively and quantitatively. In particular, our proposed method achieves the state-of-the-art result on the Cityscapes benchmark dataset for label-to-photo unpaired directional image translation.

Comment: Accepted by WACV 2020
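The abstract's key idea is that a single generator G serves both translation directions, so the input/output swap becomes a data augmentation and each direction keeps its own cycle consistency term. Below is a minimal, hedged sketch of that training objective in PyTorch; the tiny placeholder networks, the least-squares adversarial loss, and the weight `lambda_cyc` are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Hypothetical placeholders -- any CycleGAN-style generator/discriminator
# architecture could be substituted here.
G = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))    # single self-inverse generator
D_A = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))  # discriminator for domain A
D_B = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))  # discriminator for domain B

l1 = nn.L1Loss()

def generator_losses(real_a, real_b, lambda_cyc=10.0):
    """Generator-side loss for one batch under the self-inverse scheme.

    The same network G maps in both directions (the input/output swap
    augmentation), and each direction contributes its own cycle term.
    """
    fake_b = G(real_a)  # A -> B with G
    fake_a = G(real_b)  # B -> A with the *same* G (swapped pair)

    # Adversarial terms (least-squares GAN form, one common choice).
    adv_b = ((D_B(fake_b) - 1) ** 2).mean()
    adv_a = ((D_A(fake_a) - 1) ** 2).mean()

    # Separate cycle-consistency losses, one per direction:
    # G(G(a)) should recover a, and G(G(b)) should recover b.
    cyc_a = l1(G(fake_b), real_a)
    cyc_b = l1(G(fake_a), real_b)

    return adv_a + adv_b + lambda_cyc * (cyc_a + cyc_b)

# Usage sketch: random tensors stand in for unpaired batches from A and B.
loss = generator_losses(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
```

Because both directions update the same parameters, the trained G approximately satisfies G(G(x)) = x, which is what makes the learned mapping one-to-one.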

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1909.04110
Document Type :
Working Paper