
Distillation from Heterogeneous Models for Top-K Recommendation

Authors:
Kang, SeongKu
Kweon, Wonbin
Lee, Dongha
Lian, Jianxun
Xie, Xing
Yu, Hwanjo
Publication Year:
2023

Abstract

Recent recommender systems have shown remarkable performance by using an ensemble of heterogeneous models. However, this approach is exceedingly costly, as it requires resources and inference latency proportional to the number of models, which remains a bottleneck for production. Our work aims to transfer the ensemble knowledge of heterogeneous teachers to a lightweight student model using knowledge distillation (KD), so as to reduce the huge inference costs while retaining high accuracy. Through an empirical study, we find that the efficacy of distillation severely drops when transferring knowledge from heterogeneous teachers. Nevertheless, we show that an important signal for easing this difficulty can be obtained from the teachers' training trajectories. This paper proposes a new KD framework, named HetComp, that guides the student model by transferring easy-to-hard sequences of knowledge generated from the teachers' trajectories. To provide guidance according to the student's learning state, HetComp uses dynamic knowledge construction to provide progressively difficult ranking knowledge and adaptive knowledge transfer to gradually transfer finer-grained ranking information. Our comprehensive experiments show that HetComp significantly improves the distillation quality and the generalization of the student model.

Comment: TheWebConf'23
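The abstract only outlines the mechanism, but the core idea of dynamic knowledge construction (advancing from earlier, easier teacher checkpoints to later, harder ones as the student catches up) can be illustrated with a minimal toy sketch. Everything below is an assumption made for illustration, not the authors' HetComp implementation: the checkpoint score matrices, the rank-discrepancy measure, and the advancement threshold are all hypothetical placeholders.

import numpy as np

def rank_discrepancy(student_scores, target_scores, k=10):
    # Fraction of the target's top-k items the student fails to place in its own top-k.
    s_top = np.argsort(-student_scores, axis=1)[:, :k]
    t_top = np.argsort(-target_scores, axis=1)[:, :k]
    misses = [len(set(t) - set(s)) / k for s, t in zip(s_top, t_top)]
    return float(np.mean(misses))

def select_knowledge(checkpoint_scores, student_scores, state, threshold=0.3, k=10):
    # Dynamic knowledge construction (sketch): advance to the next, harder teacher
    # checkpoint once the student has largely absorbed the current one.
    if state < len(checkpoint_scores) - 1 and \
            rank_discrepancy(student_scores, checkpoint_scores[state], k) < threshold:
        state += 1
    return checkpoint_scores[state], state

# Toy usage: three checkpoints of a teacher trajectory over 5 users x 20 items.
rng = np.random.default_rng(0)
trajectory = [rng.standard_normal((5, 20)) for _ in range(3)]
student_scores = rng.standard_normal((5, 20))
state = 0
target, state = select_knowledge(trajectory, student_scores, state)
print("distilling from checkpoint", state)

In the framework described in the abstract, the transferred knowledge also becomes finer-grained over time (adaptive knowledge transfer); this sketch covers only the checkpoint-selection half of that process.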

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2303.01130
Document Type:
Working Paper