
Domain Conditioned Adaptation Network

Authors :
Shuang Li
Gao Huang
Binhui Xie
Qiuxia Lin
Jian Tang
Zhengming Ding
Chi Harold Liu
Source :
AAAI
Publication Year :
2020
Publisher :
Association for the Advancement of Artificial Intelligence (AAAI), 2020.

Abstract

Tremendous research efforts have been devoted to advancing deep domain adaptation (DA) by seeking domain-invariant features. Most existing deep DA models focus only on aligning feature representations of task-specific layers across domains while relying on a fully shared convolutional architecture for source and target. However, we argue that such strongly shared convolutional layers can be harmful to domain-specific feature learning when the source and target data distributions differ to a large extent. In this paper, we relax the shared-convnets assumption made by previous DA methods and propose a Domain Conditioned Adaptation Network (DCAN), which excites distinct convolutional channels with a domain conditioned channel attention mechanism. As a result, critical low-level domain-dependent knowledge can be explored appropriately. To our knowledge, this is the first work to explore domain-wise convolutional channel activation for deep DA networks. Moreover, to effectively align high-level feature distributions across the two domains, we further deploy domain conditioned feature correction blocks after the task-specific layers, which explicitly correct the domain discrepancy. Extensive experiments on three cross-domain benchmarks demonstrate that the proposed approach outperforms existing methods by a large margin, especially on very tough cross-domain learning tasks.

Accepted by AAAI 2020
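To make the two mechanisms described in the abstract concrete, here is a minimal PyTorch sketch, assuming an SE-style squeeze-and-excitation attention with one excitation branch per domain and a residual correction block applied to task-specific features. Class names, layer sizes, and the `reduction` and `bottleneck` parameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    """Sketch of SE-style channel attention with a separate excitation
    branch per domain, so source and target can activate different
    convolutional channels (hypothetical module, not the authors' code)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average pooling

        def excitation() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        self.source_branch = excitation()
        self.target_branch = excitation()

    def forward(self, x: torch.Tensor, is_source: bool) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)
        branch = self.source_branch if is_source else self.target_branch
        weights = branch(squeezed).view(b, c, 1, 1)
        return x * weights  # re-weight channels conditioned on the domain


class FeatureCorrectionBlock(nn.Module):
    """Sketch of a residual correction block placed after a task-specific
    layer; the learned offset is meant to shift features toward the other
    domain's distribution (assumed residual design)."""

    def __init__(self, dim: int, bottleneck: int = 256):
        super().__init__()
        self.correct = nn.Sequential(
            nn.Linear(dim, bottleneck),
            nn.ReLU(inplace=True),
            nn.Linear(bottleneck, dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.correct(h)  # feature plus learned correction offset


# Toy usage on dummy tensors.
feats = torch.randn(4, 64, 32, 32)
attn = DomainConditionedChannelAttention(channels=64)
src = attn(feats, is_source=True)
tgt = attn(feats, is_source=False)

task_feats = torch.randn(4, 512)
corr = FeatureCorrectionBlock(dim=512)
corrected = corr(task_feats)
print(src.shape, tgt.shape, corrected.shape)
```

In this reading, the per-domain excitation branches let each domain emphasize its own convolutional channels, while the correction block adds a learned residual so high-level features can be aligned explicitly; how the correction is supervised follows the paper's loss design, which is not reproduced here.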

Details

ISSN :
2374-3468 and 2159-5399
Volume :
34
Database :
OpenAIRE
Journal :
Proceedings of the AAAI Conference on Artificial Intelligence
Accession number :
edsair.doi.dedup.....3fa0337d1972b52f732b9e0c54864d14
Full Text :
https://doi.org/10.1609/aaai.v34i07.6801