
SNIPER Training: Single-Shot Sparse Training for Text-to-Speech

Authors:
Lam, Perry
Zhang, Huayun
Chen, Nancy F.
Sisman, Berrak
Herremans, Dorien
Publication Year:
2022

Abstract

Text-to-speech (TTS) models have achieved remarkable naturalness in recent years, yet, like most deep neural models, they have more parameters than necessary. Sparse TTS models can improve on dense models through pruning plus extra retraining, or can converge faster than dense models at some cost in performance. We therefore propose training TTS models with decaying sparsity, i.e., a high initial sparsity that accelerates training at first, followed by a progressive reduction of the sparsity rate to obtain better eventual performance. This decremental approach differs from current methods that increment sparsity toward a desired target, which cost significantly more time than dense training. We call our method SNIPER training: Single-shot Initialization Pruning Evolving-Rate training. Our experiments on FastSpeech2 show that SNIPER training yields lower losses in the first few training epochs, and that the final SNIPER-trained models outperform constant-sparsity models and edge out dense models, with a negligible difference in training time.
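The schedule described in the abstract can be illustrated with a short PyTorch sketch. This is not the authors' implementation: it assumes a SNIP-style saliency score computed once at initialization, a global pruning threshold, and a geometric decay of the sparsity rate; the helper names (snip_scores, masks_for_sparsity, decayed_sparsity) and the placeholders (model, loss_fn, loader, train_step) are hypothetical.

    import torch

    def snip_scores(model, loss_fn, batch):
        # Single-shot saliency at initialization (SNIP-style):
        # score each weight by |weight * gradient| on one batch.
        params = [p for p in model.parameters() if p.requires_grad]
        inputs, targets = batch
        loss = loss_fn(model(inputs), targets)
        grads = torch.autograd.grad(loss, params)
        return [(p.detach() * g).abs() for p, g in zip(params, grads)]

    def masks_for_sparsity(scores, sparsity):
        # Prune the lowest-saliency fraction of weights globally.
        flat = torch.cat([s.flatten() for s in scores])
        k = int(sparsity * flat.numel())
        if k == 0:
            return [torch.ones_like(s) for s in scores]
        threshold = flat.kthvalue(k).values
        return [(s > threshold).float() for s in scores]

    def decayed_sparsity(epoch, initial=0.9, decay=0.8):
        # Evolving rate: high sparsity early, relaxing toward dense training.
        return initial * decay ** epoch

A usage sketch: the scores are computed once before training, and the mask is regenerated from the decayed rate each epoch, so weights are progressively un-pruned rather than pruned away.

    scores = snip_scores(model, loss_fn, first_batch)  # single shot, at init
    for epoch in range(num_epochs):
        masks = masks_for_sparsity(scores, decayed_sparsity(epoch))
        for batch in loader:
            train_step(model, batch)  # placeholder optimizer step
            with torch.no_grad():
                for p, m in zip(
                    [q for q in model.parameters() if q.requires_grad], masks
                ):
                    p.mul_(m)  # keep currently pruned weights at zero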

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2211.07283
Document Type:
Working Paper