
Knowledge Distillation with Refined Logits

Authors:
Sun, Wujie
Chen, Defang
Lyu, Siwei
Chen, Genlang
Chen, Chun
Wang, Can
Publication Year:
2024

Abstract

Recent research on knowledge distillation has increasingly focused on logit distillation because of its simplicity, effectiveness, and versatility in model compression. In this paper, we introduce Refined Logit Distillation (RLD) to address the limitations of current logit distillation methods. Our approach is motivated by the observation that even high-performing teacher models can make incorrect predictions, creating a conflict between the standard distillation loss and the cross-entropy loss. This conflict can undermine the consistency of the student model's learning objectives. Previous attempts to use labels to empirically correct teacher predictions can disrupt class correlations. In contrast, our RLD employs label information to dynamically refine the teacher logits. In this way, our method effectively removes misleading information from the teacher while preserving crucial class correlations, thus enhancing the value and efficiency of the distilled knowledge. Experimental results on CIFAR-100 and ImageNet demonstrate its superiority over existing methods. The code is provided at https://github.com/zju-SWJ/RLD.

Comment: 11 pages, 7 figures
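
To make the general idea concrete, below is a minimal PyTorch sketch of logit distillation combined with a simple label-guided refinement of the teacher logits. The refinement rule shown here (swapping the teacher's top logit with the logit of the ground-truth class when the teacher is wrong) is a hypothetical illustration of the motivation in the abstract, not the RLD procedure itself; the function names, temperature, and loss weighting are likewise assumptions made for the example.

# Sketch: logit distillation with a simple, hypothetical label-guided
# refinement of teacher logits. This is NOT the paper's RLD algorithm.
import torch
import torch.nn.functional as F


def refine_teacher_logits(teacher_logits: torch.Tensor,
                          labels: torch.Tensor) -> torch.Tensor:
    """Hypothetical refinement: when the teacher's prediction disagrees
    with the label, swap the two logits so the true class ranks first
    while the rest of the distribution (class correlations) is kept."""
    refined = teacher_logits.clone()
    pred = refined.argmax(dim=1)
    wrong = (pred != labels).nonzero(as_tuple=True)[0]  # misclassified samples
    top_vals = refined[wrong, pred[wrong]].clone()
    true_vals = refined[wrong, labels[wrong]].clone()
    refined[wrong, pred[wrong]] = true_vals              # swap the two entries
    refined[wrong, labels[wrong]] = top_vals
    return refined


def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Cross-entropy on labels plus KL divergence to the refined teacher."""
    refined = refine_teacher_logits(teacher_logits, labels)
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(refined / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2                                  # standard KD scaling
    return alpha * ce + (1.0 - alpha) * kd

In practice, the teacher logits would be computed under torch.no_grad() from a frozen teacher, so gradients flow only through the student.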

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.07703
Document Type:
Working Paper