
Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding

Authors:
Cappellazzo, Umberto
Yang, Muqiao
Falavigna, Daniele
Brutti, Alessio

Publication Year: 2023

Abstract

The ability to learn new concepts sequentially is a major weakness of modern neural networks, which hinders their use in non-stationary environments. Their propensity to fit the current data distribution to the detriment of previously acquired knowledge leads to the catastrophic forgetting issue. In this work we tackle the problem of Spoken Language Understanding applied to a continual learning setting. We first define a class-incremental scenario for the SLURP dataset. Then, we propose three knowledge distillation (KD) approaches to mitigate forgetting for a sequence-to-sequence transformer model: the first KD method is applied to the encoder output (audio-KD), while the other two act on the decoder output, either on the token-level (tok-KD) or on the sequence-level (seq-KD) distributions. We show that seq-KD substantially improves all the performance metrics, and its combination with audio-KD further decreases the average WER and enhances the entity prediction metric.

Comment: Accepted at INTERSPEECH 2023. Code (will be) available at https://github.com/umbertocappellazzo/SLURP-SeqKD
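The abstract only names the three distillation points (encoder output, token-level decoder distributions, teacher-decoded sequences). Below is a minimal sketch of how such losses are commonly formulated, assuming a PyTorch sequence-to-sequence setup; all tensor shapes, function names, and the temperature value are illustrative assumptions and not the authors' implementation (see the linked repository for that).

```python
# Hedged sketch of encoder-level, token-level, and sequence-level KD losses.
# Shapes, names, and hyperparameters are assumptions for illustration only.
import torch
import torch.nn.functional as F

def audio_kd_loss(student_enc_out, teacher_enc_out):
    """Encoder-level KD: pull the student's audio encoder output toward the
    frozen teacher's encoder output, here with a simple MSE penalty."""
    return F.mse_loss(student_enc_out, teacher_enc_out)

def token_kd_loss(student_logits, teacher_logits, temperature=1.0):
    """Token-level KD: KL divergence between teacher and student per-token
    output distributions over the vocabulary."""
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    t_prob = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(s_logp, t_prob, reduction="batchmean") * temperature ** 2

def seq_kd_loss(student_logits, teacher_sequence, pad_id=0):
    """Sequence-level KD: cross-entropy of the student against the teacher's
    decoded (e.g. beam-search) output sequence used as a pseudo-target."""
    return F.cross_entropy(
        student_logits.transpose(1, 2),  # (batch, vocab, time)
        teacher_sequence,                # (batch, time)
        ignore_index=pad_id,
    )

# Toy shapes: batch 2, 50 encoder frames of dim 256, 10 decoder steps, vocab 1000.
enc_s, enc_t = torch.randn(2, 50, 256), torch.randn(2, 50, 256)
log_s, log_t = torch.randn(2, 10, 1000), torch.randn(2, 10, 1000)
pseudo_target = torch.randint(0, 1000, (2, 10))

total_kd = (audio_kd_loss(enc_s, enc_t)
            + token_kd_loss(log_s, log_t)
            + seq_kd_loss(log_s, pseudo_target))
```

In a continual learning setting, the "teacher" here would be the model snapshot from the previous task, and one or more of these terms would be added to the regular training loss to discourage forgetting.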

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2305.13899
Document Type: Working Paper