Cycle-consistent Conditional Adversarial Transfer Networks
- Source: ACM Multimedia 2019
- Publication Year: 2019
Abstract
- Domain adaptation investigates the problem of cross-domain knowledge transfer where the labeled source domain and the unlabeled target domain have distinct data distributions. Recently, adversarial training has been successfully applied to domain adaptation and has achieved state-of-the-art performance. However, current adversarial models still suffer from a critical weakness arising from the equilibrium challenge of adversarial training: although most existing methods can confuse the domain discriminator, they cannot guarantee that the source and target domains are sufficiently similar. In this paper, we propose a novel approach named cycle-consistent conditional adversarial transfer networks (3CATN) to handle this issue. Our approach aligns the domains by leveraging adversarial training. Specifically, we condition the adversarial networks on the cross-covariance of the learned features and the classifier predictions to capture the multimodal structures of the data distributions. However, since the classifier predictions are not certain information, a strong condition on the predictions is risky when they are inaccurate. We therefore further propose that truly domain-invariant features should be translatable from one domain to the other. To this end, we introduce two feature translation losses and one cycle-consistency loss into the conditional adversarial domain adaptation networks. Extensive experiments on both classical and large-scale datasets verify that our model outperforms previous state-of-the-art methods with significant improvements.
- Comment: Code at github.com/lijin118/3CATN
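The abstract names three ingredients: conditioning the domain discriminator on the cross-covariance (outer product) of learned features and classifier predictions, two feature translation losses, and one cycle-consistency loss. The PyTorch sketch below illustrates one way such losses could be wired together; the layer sizes, the mean-matching form of the translation losses, and all names (condition, T_s2t, T_t2s) are illustrative assumptions, not the authors' implementation, which is available at github.com/lijin118/3CATN.

```python
# Minimal sketch of the losses described in the abstract; sizes and loss
# forms are assumptions, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 256, 31            # assumed feature/class dimensions

discriminator = nn.Sequential(             # domain discriminator on conditioned features
    nn.Linear(feat_dim * num_classes, 1024), nn.ReLU(),
    nn.Linear(1024, 1), nn.Sigmoid())
T_s2t = nn.Linear(feat_dim, feat_dim)       # hypothetical feature translator: source -> target
T_t2s = nn.Linear(feat_dim, feat_dim)       # hypothetical feature translator: target -> source

def condition(features, predictions):
    """Outer-product (multilinear) conditioning of features on classifier
    predictions, capturing their cross-covariance structure."""
    outer = torch.bmm(predictions.unsqueeze(2), features.unsqueeze(1))  # (B, C, D)
    return outer.view(features.size(0), -1)                             # (B, C*D)

def adversarial_loss(f_s, g_s, f_t, g_t):
    """Conditional adversarial loss: the discriminator labels conditioned
    source features as 1 and conditioned target features as 0."""
    d_s = discriminator(condition(f_s, g_s))
    d_t = discriminator(condition(f_t, g_t))
    return F.binary_cross_entropy(d_s, torch.ones_like(d_s)) + \
           F.binary_cross_entropy(d_t, torch.zeros_like(d_t))

def translation_and_cycle_loss(f_s, f_t):
    """Two translation losses (translated features should resemble the other
    domain; approximated here by batch-mean matching) plus a cycle-consistency
    loss (translating back should recover the original features)."""
    loss_s2t = F.mse_loss(T_s2t(f_s).mean(0), f_t.mean(0))
    loss_t2s = F.mse_loss(T_t2s(f_t).mean(0), f_s.mean(0))
    cycle = F.l1_loss(T_t2s(T_s2t(f_s)), f_s) + F.l1_loss(T_s2t(T_t2s(f_t)), f_t)
    return loss_s2t + loss_t2s + cycle

# toy usage with random features and softmax predictions
f_s, f_t = torch.randn(8, feat_dim), torch.randn(8, feat_dim)
g_s = F.softmax(torch.randn(8, num_classes), dim=1)
g_t = F.softmax(torch.randn(8, num_classes), dim=1)
total = adversarial_loss(f_s, g_s, f_t, g_t) + translation_and_cycle_loss(f_s, f_t)
print(total.item())
```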
Details
- Database: arXiv
- Journal: ACM Multimedia 2019
- Publication Type: Report
- Accession number: edsarx.1909.07618
- Document Type: Working Paper