Improving diversity and discriminability based implicit contrastive learning for unsupervised domain adaptation.
- Source :
- Applied Intelligence; Oct 2024, Vol. 54, Issue 19, p10007-10017, 11p
- Publication Year :
- 2024
-
Abstract
- In unsupervised domain adaptation (UDA), knowledge is transferred from label-rich source domains to related but unlabeled target domains. The most popular state-of-the-art works suggest that performing domain alignment at the class level can alleviate domain shift. However, most of them rely on domain-adversarial training, which is hard to train and slow to converge. In this paper, we propose a novel contrastive learning method to improve diversity and discriminability for domain adaptation, dubbed IDD_ICL, which improves the discriminability of the model while increasing sample diversity. Specifically, we first design a novel implicit contrastive learning loss at the sample level by implicitly augmenting samples of the source domain. While augmenting the diversity of the source domain, we cluster samples of the same category in the source domain together and disperse samples of different categories, thereby improving the discriminative ability of the model. Furthermore, we show that our algorithm is effective by implicitly learning an infinite number of similar samples. Our results demonstrate that our method does not require complex techniques or specialized equipment, making it readily adoptable and applicable in practical scenarios. [ABSTRACT FROM AUTHOR]
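The abstract describes a class-conditional contrastive objective over source-domain samples: embeddings of the same category are pulled together while embeddings of different categories are pushed apart. The record does not give the paper's actual IDD_ICL loss or its implicit (infinite) augmentation scheme, so the following is only a minimal PyTorch sketch of a supervised contrastive loss of that general kind; the function name, temperature value, and toy data are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_contrastive_loss(features, labels, temperature=0.1):
    """Pull same-class embeddings together, push different-class ones apart.

    features: (N, D) float tensor of embeddings
    labels:   (N,)   integer tensor of class ids
    (Illustrative sketch only; not the IDD_ICL loss from the paper.)
    """
    features = F.normalize(features, dim=1)                   # work in cosine-similarity space
    sim = features @ features.t() / temperature               # (N, N) similarity logits
    n = sim.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(self_mask, float("-inf"))           # never contrast a sample with itself
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                     # anchors with at least one positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (-(pos_log_prob[valid] / pos_counts[valid])).mean()

# Toy usage: 16 random source-domain embeddings from 4 classes.
feats = torch.randn(16, 128)
labels = torch.randint(0, 4, (16,))
print(class_contrastive_loss(feats, labels).item())
```

The "implicit" and "infinite" aspects mentioned in the abstract would replace the explicit per-sample contrast above with an expected loss over implicitly augmented features; that derivation is specific to the paper and is not reproduced here.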
- Subjects :
- IMPLICIT learning
- LEARNING ability
- CLUSTER sampling
- KNOWLEDGE transfer
- ALGORITHMS
Details
- Language :
- English
- ISSN :
- 0924-669X
- Volume :
- 54
- Issue :
- 19
- Database :
- Complementary Index
- Journal :
- Applied Intelligence
- Publication Type :
- Academic Journal
- Accession number :
- 179041498
- Full Text :
- https://doi.org/10.1007/s10489-024-05351-y