
Rethinking Domain Adaptation and Generalization in the Era of CLIP

Authors:
Feng, Ruoyu
Yu, Tao
Jin, Xin
Yu, Xiaoyuan
Xiao, Lei
Chen, Zhibo
Publication Year:
2024

Abstract

In recent studies on domain adaptation, significant emphasis has been placed on transferring shared knowledge from a source domain to a target domain. Recently, the large vision-language pre-trained model CLIP has shown strong zero-shot recognition ability, and parameter-efficient tuning can further improve its performance on specific tasks. This work demonstrates that a simple domain prior boosts CLIP's zero-shot recognition in a specific domain. Moreover, CLIP's adaptation relies less on source domain data because of its diverse pre-training dataset. Furthermore, we create a benchmark for zero-shot adaptation and pseudo-labeling-based self-training with CLIP. Last but not least, we propose improving the task generalization ability of CLIP from multiple unlabeled domains, a more practical and unique scenario. We believe our findings motivate a rethinking of domain adaptation benchmarks and of the role of related algorithms in the era of CLIP.
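The abstract's first finding, that a simple domain prior boosts zero-shot recognition, can be illustrated by conditioning CLIP's text prompts on the target domain's name. Below is a minimal sketch using the Hugging Face CLIP API; the checkpoint, label set, image path, and prompt templates are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch (not the authors' code): zero-shot CLIP classification with a
# domain prior expressed through the prompt template. All names below are
# illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

classes = ["dog", "guitar", "house"]   # hypothetical label set
domain = "sketch"                      # known prior about the target domain

# Domain-agnostic prompts vs. prompts conditioned on the target domain.
plain_prompts = [f"a photo of a {c}" for c in classes]
domain_prompts = [f"a {domain} of a {c}" for c in classes]

image = Image.open("example.jpg")      # hypothetical target-domain image

with torch.no_grad():
    inputs = processor(text=domain_prompts, images=image,
                       return_tensors="pt", padding=True)
    outputs = model(**inputs)
    # logits_per_image has shape (num_images, num_prompts); softmax over
    # prompts gives per-class zero-shot probabilities.
    probs = outputs.logits_per_image.softmax(dim=-1)

print({c: float(p) for c, p in zip(classes, probs[0])})
```

The pseudo-labeling-based self-training the abstract benchmarks would then, presumably, treat the most confident of such zero-shot predictions as labels for further tuning on unlabeled target data.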

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.15173
Document Type:
Working Paper