Iterative Compression of End-to-End ASR Model using AutoML
- Source : INTERSPEECH 2020
- Publication Year : 2020
Abstract
- Increasing demand for on-device Automatic Speech Recognition (ASR) systems has resulted in renewed interest in developing automatic model compression techniques. Past research has shown that an AutoML-based Low Rank Factorization (LRF) technique, when applied to an end-to-end Encoder-Attention-Decoder style ASR model, can achieve a speedup of up to 3.7x, outperforming laborious manual rank-selection approaches. However, we show that current AutoML-based search techniques only work up to a certain compression level, beyond which they fail to produce compressed models with acceptable word error rates (WER). In this work, we propose an iterative AutoML-based LRF approach that achieves over 5x compression without degrading the WER, thereby advancing the state-of-the-art in ASR compression.
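- The record contains no implementation details beyond the abstract. As a rough illustration of the Low Rank Factorization step the abstract refers to, the sketch below shows how a single dense weight matrix can be replaced by two smaller factors via truncated SVD and how the per-matrix compression ratio follows from the chosen rank. It is a minimal, hypothetical example using NumPy; the function names, the rank value, and the matrix sizes are assumptions, and it does not reproduce the paper's AutoML rank search or iterative fine-tuning.

```python
import numpy as np

def low_rank_factorize(weight: np.ndarray, rank: int):
    """Approximate a dense weight matrix W (m x n) by two smaller
    factors U (m x rank) and V (rank x n) using truncated SVD.
    This is an illustrative sketch, not the paper's implementation."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    u_r = u[:, :rank] * s[:rank]   # fold singular values into the left factor
    v_r = vt[:rank, :]
    return u_r, v_r

def compression_ratio(weight: np.ndarray, rank: int) -> float:
    """Parameter count before vs. after factorizing one matrix."""
    m, n = weight.shape
    return (m * n) / (rank * (m + n))

# Toy example: one (hypothetical) encoder projection matrix at rank 64.
W = np.random.randn(1024, 512).astype(np.float32)
U, V = low_rank_factorize(W, rank=64)
rel_error = np.linalg.norm(W - U @ V) / np.linalg.norm(W)
print(f"compression {compression_ratio(W, 64):.1f}x, relative error {rel_error:.3f}")
```

- In the iterative scheme the abstract describes, one would presumably alternate between selecting ranks (via AutoML search), factorizing the affected layers as above, and fine-tuning the model, repeating until the target compression is reached or WER degrades; those outer-loop details are not specified in this record.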
- Subjects :
- Computer Science - Machine Learning
- Statistics - Machine Learning
Details
- Database : arXiv
- Journal : INTERSPEECH 2020
- Publication Type : Report
- Accession number : edsarx.2008.02897
- Document Type : Working Paper