1. Quantifying Knowledge Distillation Using Partial Information Decomposition
- Authors
Dissanayake, Pasan; Hamman, Faisal; Halder, Barproda; Sucholutsky, Ilia; Zhang, Qiuyi; Dutta, Sanghamitra
- Subjects
Statistics - Machine Learning; Computer Science - Computer Vision and Pattern Recognition; Computer Science - Information Theory; Computer Science - Machine Learning; Electrical Engineering and Systems Science - Image and Video Processing
- Abstract
Knowledge distillation provides an effective method for deploying complex machine learning models in resource-constrained environments. It typically involves training a smaller student model to emulate either the probabilistic outputs or the internal feature representations of a larger teacher model. By doing so, the student model often achieves substantially better performance on a downstream task compared to when it is trained independently. Nevertheless, the teacher's internal representations can also encode noise or additional information that may not be relevant to the downstream task. This observation motivates our primary question: What are the information-theoretic limits of knowledge transfer? To this end, we leverage a body of work in information theory called Partial Information Decomposition (PID) to quantify the distillable and distilled knowledge of a teacher's representation corresponding to a given student and a downstream task. Moreover, we demonstrate that this metric can be practically used in distillation to address challenges caused by the complexity gap between the teacher and the student representations.
- Comment
Accepted at NeurIPS 2024 Machine Learning and Compression Workshop
- Published
2024
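
For context, here is a minimal sketch of the standard Partial Information Decomposition identity that the abstract's metric builds on (Williams-Beer style). The symbols Y (downstream task variable), T (teacher representation), and S (student representation), as well as the interpretive comments, are illustrative assumptions and not necessarily the paper's exact definitions.

```latex
% Standard PID of the information that the teacher representation T and the
% student representation S jointly carry about the downstream task variable Y.
% Notation is illustrative; the paper may define its terms differently.
\[
  I(Y; T, S)
    = \mathrm{Red}(Y; T, S)           % redundant: obtainable from T alone or S alone
    + \mathrm{Unq}(Y; T \setminus S)  % unique to the teacher, absent from the student
    + \mathrm{Unq}(Y; S \setminus T)  % unique to the student, absent from the teacher
    + \mathrm{Syn}(Y; T, S)           % synergistic: only available from T and S jointly
\]
% Under this reading, the teacher-unique term is a natural candidate for
% "distillable" knowledge, and the redundancy term for knowledge already
% "distilled" into the student -- a plausible interpretation, not a quotation
% of the paper's definitions.
```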