
VLM-KD: Knowledge Distillation from VLM for Long-Tail Visual Recognition

Authors:
Zhang, Zaiwei
Meyer, Gregory P.
Lu, Zhichao
Shrivastava, Ashish
Ravichandran, Avinash
Wolff, Eric M.
Publication Year:
2024

Abstract

For visual recognition, knowledge distillation typically involves transferring knowledge from a large, well-trained teacher model to a smaller student model. In this paper, we introduce an effective method to distill knowledge from an off-the-shelf vision-language model (VLM), demonstrating that it provides novel supervision in addition to that from a conventional vision-only teacher model. Our key technical contribution is the development of a framework that generates novel text supervision and distills free-form text into a vision encoder. We showcase the effectiveness of our approach, termed VLM-KD, across various benchmark datasets, showing that it surpasses several state-of-the-art long-tail visual classifiers. To our knowledge, this work is the first to utilize knowledge distillation with text supervision generated by an off-the-shelf VLM and apply it to vanilla, randomly initialized vision encoders.
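The abstract does not spell out the distillation objective. As a rough illustration of what "distilling free-form text into a vision encoder" could look like, the minimal sketch below aligns student image features with frozen embeddings of VLM-generated captions via a cosine loss; the class names, the projection head, and the choice of loss are hypothetical assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class TextDistillationHead(nn.Module):
    """Projects student image features into the text-embedding space (hypothetical)."""

    def __init__(self, vision_dim: int, text_dim: int):
        super().__init__()
        self.proj = nn.Linear(vision_dim, text_dim)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(image_features), dim=-1)


def text_distillation_loss(image_features: torch.Tensor,
                           caption_embeddings: torch.Tensor,
                           head: TextDistillationHead) -> torch.Tensor:
    """Cosine-alignment loss between projected image features and pre-computed
    (frozen) embeddings of VLM-generated captions."""
    projected = head(image_features)                    # (B, text_dim), L2-normalized
    targets = F.normalize(caption_embeddings, dim=-1)   # (B, text_dim), L2-normalized
    cosine_sim = (projected * targets).sum(dim=-1)      # (B,)
    return (1.0 - cosine_sim).mean()


if __name__ == "__main__":
    # Toy shapes: batch of 8, vision features of dim 512, caption embeddings of dim 768.
    head = TextDistillationHead(vision_dim=512, text_dim=768)
    img_feats = torch.randn(8, 512)    # from the randomly initialized vision encoder
    cap_embeds = torch.randn(8, 768)   # from a frozen text encoder over VLM captions
    loss = text_distillation_loss(img_feats, cap_embeds, head)
    loss.backward()                    # here gradients reach only the projection head;
                                       # in training they would also reach the vision encoder
    print(loss.item())

In an actual training setup, this text-supervision term would presumably be combined with the standard classification loss and any vision-only teacher distillation loss; the weighting between these terms is not stated in the abstract.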

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2408.16930
Document Type:
Working Paper