
Image-to-image translation for cross-domain disentanglement

Authors:
Gonzalez-Garcia, Abel
van de Weijer, Joost
Bengio, Yoshua
Publication Year:
2018

Abstract

Deep image translation methods have recently shown excellent results, outputting high-quality images covering multiple modes of the data distribution. There has also been increased interest in disentangling the internal representations learned by deep methods to further improve their performance and achieve finer control. In this paper, we bridge these two objectives and introduce the concept of cross-domain disentanglement. We aim to separate the internal representation into three parts. The shared part contains information common to both domains. The exclusive parts, on the other hand, contain only factors of variation that are particular to each domain. We achieve this through bidirectional image translation based on Generative Adversarial Networks and cross-domain autoencoders, a novel network component. Our model offers multiple advantages: we can output diverse samples covering multiple modes of the distributions of both domains, perform domain-specific image transfer and interpolation, and perform cross-domain retrieval without the need for labeled data, requiring only paired images. We compare our model to the state of the art in multi-modal image translation and achieve better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets.

Comment: Accepted to NIPS 2018
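The abstract outlines the core mechanism: each domain's encoder splits its representation into a shared code and a domain-exclusive code, and a cross-domain autoencoder reconstructs one domain's images from the other domain's shared code plus its own exclusive code. Below is a minimal PyTorch sketch of that latent split. All module names, layer sizes, and the single reconstruction term shown are illustrative assumptions, not the paper's actual architecture or full objective (which, per the abstract, also involves bidirectional GAN-based translation).

```python
# Minimal sketch of the cross-domain latent split described in the abstract.
# Module names, dimensions, and the loss wiring are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Encodes an image into a shared code and a domain-exclusive code."""
    def __init__(self, shared_dim=64, exclusive_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_shared = nn.Linear(64, shared_dim)        # factors common to both domains
        self.to_exclusive = nn.Linear(64, exclusive_dim)  # factors particular to this domain

    def forward(self, x):
        h = self.backbone(x)
        return self.to_shared(h), self.to_exclusive(h)

class Decoder(nn.Module):
    """Reconstructs an image from a (shared, exclusive) code pair."""
    def __init__(self, shared_dim=64, exclusive_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(shared_dim + exclusive_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, shared, exclusive):
        return self.net(torch.cat([shared, exclusive], dim=1))

# Cross-domain autoencoding idea: decode a domain-Y image from X's *shared*
# code plus Y's own exclusive code, so only shared factors can cross domains.
enc_x, enc_y = Encoder(), Encoder()
dec_y = Decoder()
x = torch.randn(4, 3, 32, 32)  # paired batch from domain X (dummy data)
y = torch.randn(4, 3, 32, 32)  # paired batch from domain Y (dummy data)

s_x, _ = enc_x(x)                             # shared code from X
_, e_y = enc_y(y)                             # exclusive code from Y
y_hat = dec_y(s_x, e_y)                       # cross-domain reconstruction
recon_loss = nn.functional.l1_loss(y_hat, y)  # one term of the full objective
```

Because the decoder for domain Y only ever receives X's shared code together with Y's exclusive code, shared factors are the only route by which information crosses domains; the paper's full model combines this with bidirectional GAN-based translation rather than the single reconstruction term sketched here.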

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1805.09730
Document Type:
Working Paper