
Unsupervised Object-Level Image-to-Image Translation Using Positional Attention Bi-Flow Generative Network

Authors :
Liuchun Yuan
Dihu Chen
Haifeng Hu
Source :
IEEE Access, Vol 7, Pp 30637-30647 (2019)
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Recent work in unsupervised image-to-image translation adversarially learns a mapping between different domains but cannot distinguish the foreground from the background. Existing image-to-image translation methods mainly transfer the global image across the source and target domains. However, not all regions of an image should be transferred, because forcefully transferring unnecessary parts leads to unrealistic translations. In this paper, we present a positional attention bi-flow generative network that focuses the translation model on a region or object of interest in the image. We assume that the image representation can be decomposed into three parts: image-content, image-style, and image-position features. We apply an encoder to extract these features and a bi-flow generator with an attention module to achieve the translation task in an end-to-end manner. To realize object-level translation, we adopt the image-position features to label the common region of interest between the source and target domains. We analyze the proposed framework and provide qualitative and quantitative comparisons. Extensive experiments validate that our proposed model accomplishes object-level translation and obtains results competitive with other state-of-the-art approaches.
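The core object-level idea in the abstract, restricting translation to an attended region and leaving the background untouched, can be sketched as a simple mask-guided composite. This is a minimal illustration, not the authors' implementation: the function name and the toy inputs are hypothetical, and in the actual model the mask would come from the learned image-position features rather than be supplied by hand.

```python
import numpy as np

def composite(source, translated, mask):
    """Blend translated pixels into the source where mask is near 1,
    and keep the original background where mask is near 0.
    source, translated: (H, W, C) arrays; mask: (H, W) array in [0, 1]."""
    mask = mask[..., None]  # broadcast the (H, W) mask over color channels
    return mask * translated + (1.0 - mask) * source

# Toy 2x2 RGB example: the attention mask selects only the top-left pixel,
# so only that pixel receives the translated value.
source = np.zeros((2, 2, 3))      # original image (all zeros)
translated = np.ones((2, 2, 3))   # globally translated image (all ones)
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])     # hypothetical position/attention mask
out = composite(source, translated, mask)
```

Here `out` takes the translated value only at the masked pixel and keeps the source everywhere else, which is the behavior the paper argues avoids unrealistic translations of unnecessary regions.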

Details

Language :
English
ISSN :
21693536
Volume :
7
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.bd5500f2bece43758d5431d74e68fcbc
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2019.2903543