
Factorized and progressive knowledge distillation for CTC-based ASR models.

Authors :
Tian, Sanli
Li, Zehan
Lyv, Zhaobiao
Cheng, Gaofeng
Xiao, Qing
Li, Ta
Zhao, Qingwei
Source :
Speech Communication. May 2024, Vol. 160.
Publication Year :
2024

Abstract

Knowledge distillation (KD) is a popular model compression method that improves the performance of lightweight models by transferring knowledge from a teacher model to a student model. However, applying KD to connectionist temporal classification (CTC) based ASR models is challenging due to their peaky posterior property. In this paper, we propose to address this issue by treating non-blank and blank frames differently, for two main reasons. First, the non-blank frames in the teacher model's posterior matrix and hidden representations provide more acoustic and linguistic information than the blank frames, but non-blank frames account for only a small fraction of all frames, leading to a severe learning imbalance problem. Second, the non-blank tokens in the teacher's blank-frame posteriors exhibit irregular probability distributions, negatively impacting the student model's learning. Thus, we propose to factorize the distillation of non-blank and blank frames and further combine them into a progressive KD framework, which contains three incremental stages to help the student model gradually build up its knowledge. The first stage is a simple binary classification KD task, in which the student learns to distinguish between non-blank and blank frames, since the two types of frames are learned separately in subsequent stages. The second stage is a factorized representation-based KD, in which hidden representations are divided into non-blank and blank frames so that both can be distilled in a balanced manner. In the third stage, the student learns from the teacher's posterior matrix through our proposed method, factorized KL-divergence (FKL), which performs different operations on blank and non-blank frame posteriors to alleviate the imbalance issue and reduce the influence of irregular probability distributions. Compared to the baseline, our proposed method achieves a 22.5% relative CER reduction on the Aishell-1 dataset, a 23.0% relative WER reduction on the Tedlium-2 dataset, and a 17.6% relative WER reduction on the LibriSpeech dataset. To show the generalization of our method, we also evaluate it on the hybrid CTC/Attention architecture as well as in scenarios with cross-model-topology KD.

• We explore why conventional KD underperforms when applied to CTC models.
• We propose Factorized KL-divergence for KD of CTC-based models.
• We propose a progressive KD framework to gradually build up the student's knowledge.

[ABSTRACT FROM AUTHOR]
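The abstract does not give the exact form of FKL, but the frame-level factorization it describes can be illustrated in code. The snippet below is a minimal, hypothetical PyTorch sketch: it splits frames into blank and non-blank groups using the teacher's frame-wise argmax, averages a KL term within each group, and reweights the two averages so that the scarce non-blank frames are not swamped by the many blank frames. The function name, the argmax-based split, the temperature, and the group weights are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def factorized_kd_loss(student_logits, teacher_logits, blank_id=0,
                       nonblank_weight=1.0, blank_weight=0.3, temperature=2.0):
    """Frame-level KD that treats blank and non-blank frames separately.

    Hypothetical sketch: frames are grouped by the teacher's argmax, and each
    group gets its own averaged KL term, so non-blank frames keep their weight
    despite being a small fraction of all frames. The paper's FKL may apply
    different per-group operations.
    Shapes: student_logits, teacher_logits are (B, T, V).
    """
    # Temperature-smoothed log-posteriors for teacher and student.
    t_log_probs = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # Per-frame KL(teacher || student), summed over the vocabulary dimension.
    kl_per_frame = F.kl_div(s_log_probs, t_log_probs,
                            reduction="none", log_target=True).sum(-1)  # (B, T)

    # Split frames by the teacher's most likely token (blank vs. non-blank).
    blank_mask = t_log_probs.argmax(dim=-1).eq(blank_id).float()        # (B, T)
    nonblank_mask = 1.0 - blank_mask

    # Average each group separately to counter the blank/non-blank imbalance.
    eps = 1e-8
    blank_loss = (kl_per_frame * blank_mask).sum() / (blank_mask.sum() + eps)
    nonblank_loss = (kl_per_frame * nonblank_mask).sum() / (nonblank_mask.sum() + eps)

    return nonblank_weight * nonblank_loss + blank_weight * blank_loss

# Example usage with random tensors (B=2 utterances, T=50 frames, V=100 tokens):
# loss = factorized_kd_loss(torch.randn(2, 50, 100), torch.randn(2, 50, 100))
```

Averaging within each group before combining, rather than averaging over all frames at once, is one straightforward way to keep the few non-blank frames from being dominated by blank frames; the weighting between the two terms would be a tunable hyperparameter in such a setup.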

Details

Language :
English
ISSN :
0167-6393
Volume :
160
Database :
Academic Search Index
Journal :
Speech Communication
Publication Type :
Academic Journal
Accession number :
177483652
Full Text :
https://doi.org/10.1016/j.specom.2024.103071