
Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss

Authors:
Zeineldeen, Mohammad
Audhkhasi, Kartik
Baskar, Murali Karthick
Ramabhadran, Bhuvana
Publication Year:
2023

Abstract

This work studies knowledge distillation (KD) and addresses its constraints for recurrent neural network transducer (RNN-T) models. In hard distillation, a teacher model transcribes large amounts of unlabelled speech, and the resulting transcripts are used to train a student model. Soft distillation, another popular KD method, distills the output logits of the teacher model. Due to the nature of RNN-T alignments, applying soft distillation between RNN-T architectures with different posterior distributions is challenging. In addition, bad teachers with high word error rate (WER) reduce the efficacy of KD. We investigate how to effectively distill knowledge from ASR teachers of variable quality, which, to the best of our knowledge, has not been studied before. We show that a sequence-level KD method, full-sum distillation, outperforms other distillation methods for RNN-T models, especially for bad teachers. We also propose a variant of full-sum distillation that distills the sequence discriminative knowledge of the teacher, leading to further improvements in WER. We conduct experiments on public datasets, namely SpeechStew and LibriSpeech, and on in-house production data.

Comment: Accepted at ICASSP 2023
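
For intuition, below is a minimal sketch (not the paper's exact formulation) of a generic sequence-level, full-sum distillation loss for RNN-T. It uses torchaudio's RNN-T loss to obtain the full-sum log-probability of a label sequence, i.e. the probability marginalized over all alignments; the joiner output shapes, the n-best weighting scheme, and all function and variable names are illustrative assumptions.

```python
# Illustrative sketch only: sequence-level (full-sum) distillation for RNN-T.
# The full-sum log-probability of a label sequence equals the negative RNN-T
# loss, which already marginalizes over all alignments.
import torch
import torchaudio.functional as F_audio


def full_sum_log_prob(joiner_logits, labels, frame_lengths, label_lengths, blank_id=0):
    """log p(labels | audio), summed over all RNN-T alignments (= -RNN-T loss)."""
    loss = F_audio.rnnt_loss(
        logits=joiner_logits,               # (B, T, U + 1, vocab), unnormalized
        targets=labels.int(),               # (B, U)
        logit_lengths=frame_lengths.int(),  # (B,)
        target_lengths=label_lengths.int(), # (B,)
        blank=blank_id,
        reduction="none",
    )
    return -loss                            # (B,)


def sequence_kd_loss(student_logits, teacher_logits, nbest_labels,
                     frame_lengths, label_lengths, blank_id=0):
    """Sequence-level KD over a teacher n-best list (assumed setup).

    student_logits, teacher_logits, nbest_labels, and label_lengths are stacked
    over the n-best axis, shapes (N, B, ...); frame_lengths is (B,). Each
    hypothesis is weighted by the teacher's renormalized full-sum probability,
    and the student's negative log-probability is minimized under that weight.
    """
    with torch.no_grad():
        teacher_lp = torch.stack([
            full_sum_log_prob(tl, lab, frame_lengths, ll, blank_id)
            for tl, lab, ll in zip(teacher_logits, nbest_labels, label_lengths)
        ])                                   # (N, B)
        weights = torch.softmax(teacher_lp, dim=0)

    student_lp = torch.stack([
        full_sum_log_prob(sl, lab, frame_lengths, ll, blank_id)
        for sl, lab, ll in zip(student_logits, nbest_labels, label_lengths)
    ])                                       # (N, B)

    return -(weights * student_lp).sum(dim=0).mean()
```

In this sketch, the n-best hypotheses and teacher scores would come from decoding the teacher on unlabelled audio; keeping only the 1-best hypothesis with weight one reduces the objective to hard distillation with the standard full-sum RNN-T loss.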

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2303.05958
Document Type:
Working Paper