
VeCAF: Vision-language Collaborative Active Finetuning with Training Objective Awareness

Authors :
Zhang, Rongyu
Cai, Zefan
Yang, Huanrui
Liu, Zidong
Gudovskiy, Denis
Okuno, Tomoyuki
Nakata, Yohei
Keutzer, Kurt
Chang, Baobao
Du, Yuan
Du, Li
Zhang, Shanghang
Publication Year :
2024

Abstract

Finetuning a pretrained vision model (PVM) is a common technique for learning downstream vision tasks. However, the conventional finetuning process with randomly sampled data points results in diminished training efficiency. To address this drawback, we propose a novel approach, Vision-language Collaborative Active Finetuning (VeCAF). With the emerging availability of labels and natural language annotations of images through web-scale crawling or controlled generation, VeCAF makes use of this information to perform parametric data selection for PVM finetuning. VeCAF incorporates the finetuning objective to select significant data points that effectively guide the PVM towards faster convergence to meet the performance goal. This process is assisted by the inherent semantic richness of the text embedding space, which we use to augment image features. Furthermore, the flexibility of text-domain augmentation allows VeCAF to handle out-of-distribution scenarios without external data. Extensive experiments show that VeCAF delivers leading performance and high computational efficiency, outperforming baselines on both in-distribution and out-of-distribution image classification tasks. On ImageNet, VeCAF uses up to 3.3x fewer training batches than full finetuning to reach the target performance, and achieves a 2.7% accuracy improvement over the state-of-the-art active finetuning method with the same number of batches.

Comment: 13 pages
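The abstract describes two ingredients: objective-aware selection of finetuning data and augmentation of image features in the text embedding space. The following is a minimal sketch of that general idea, not the paper's actual algorithm; the names `vision_model`, `encode`, `classifier`, `text_embed`, and the blending weight `alpha` are assumptions introduced purely for illustration.

```python
import torch
import torch.nn.functional as F

# Hypothetical setup (not from the paper): `vision_model` is the PVM being
# finetuned and exposes `encode` (feature extractor) and `classifier` (task
# head); `text_embed` holds precomputed caption embeddings aligned with the
# image feature space; `labels` are the candidate-pool labels.

@torch.no_grad()
def select_finetuning_batch(vision_model, images, text_embed, labels,
                            budget, alpha=0.5):
    """Pick the `budget` candidates whose current loss is largest, scored on
    image features nudged toward their paired text embeddings."""
    feats = F.normalize(vision_model.encode(images), dim=-1)   # [N, D]
    text_embed = F.normalize(text_embed, dim=-1)               # [N, D]
    # Text-domain augmentation: blend each image feature with its caption embedding.
    aug = F.normalize((1 - alpha) * feats + alpha * text_embed, dim=-1)
    logits = vision_model.classifier(aug)                      # [N, C]
    per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    # Training-objective awareness: keep the samples the current model finds hardest.
    return torch.topk(per_sample_loss, k=budget).indices
```

In this sketch the finetuning loop would repeatedly call the selector on the unlabeled-or-annotated pool and train only on the returned indices, which is one plausible way to realize "parametric data selection" guided by the finetuning objective.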

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2401.07853
Document Type :
Working Paper