
Label-free Knowledge Distillation with Contrastive Loss for Light-weight Speaker Recognition

Authors:
Peng, Zhiyuan
He, Xuanji
Ding, Ke
Lee, Tan
Wan, Guanglu
Publication Year:
2022

Abstract

Very deep models for speaker recognition (SR) have demonstrated remarkable performance improvements in recent research. However, it is impractical to deploy these models for on-device applications with constrained computational resources. On the other hand, light-weight models are highly desired in practice despite their sub-optimal performance. This research aims to improve light-weight SR models through large-scale label-free knowledge distillation (KD). Existing KD approaches for SR typically require speaker labels to learn task-specific knowledge, due to the inefficiency of conventional distillation losses. To address this inefficiency and achieve label-free KD, we propose to employ the contrastive loss from self-supervised learning for distillation. Extensive experiments are conducted on a collection of public speech datasets from diverse sources. Results on light-weight SR models show that the proposed approach of label-free KD with contrastive loss consistently outperforms both conventional distillation methods and self-supervised learning methods by a significant margin.
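The abstract does not specify the exact form of the distillation objective. The following is a minimal, hypothetical sketch of how an InfoNCE-style contrastive loss could align student and teacher speaker embeddings without speaker labels; the function name, temperature value, in-batch negatives, and embedding dimensions are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch: InfoNCE-style contrastive distillation between
    # teacher and student speaker embeddings; no speaker labels are needed.
    import torch
    import torch.nn.functional as F

    def contrastive_distillation_loss(student_emb, teacher_emb, temperature=0.07):
        # student_emb, teacher_emb: (batch, dim) embeddings of the same utterances,
        # assumed to be projected to a shared dimension beforehand.
        s = F.normalize(student_emb, dim=-1)
        t = F.normalize(teacher_emb, dim=-1)
        logits = s @ t.T / temperature                      # (batch, batch) cosine similarities
        targets = torch.arange(s.size(0), device=s.device)  # matching teacher embedding is the positive
        return F.cross_entropy(logits, targets)             # other utterances in the batch act as negatives

    # Dummy usage with illustrative shapes (batch of 32, 192-dim embeddings).
    student_emb = torch.randn(32, 192)
    teacher_emb = torch.randn(32, 192)
    loss = contrastive_distillation_loss(student_emb, teacher_emb)

In this sketch, each student embedding is pulled toward the teacher embedding of the same utterance and pushed away from the other utterances in the batch, which is one common way a contrastive objective can replace a label-dependent distillation loss.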

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2212.03090
Document Type:
Working Paper