
Efficiently Train ASR Models that Memorize Less and Perform Better with Per-core Clipping

Authors :
Wang, Lun
Thakkar, Om
Meng, Zhong
Rafidi, Nicole
Prabhavalkar, Rohit
Narayanan, Arun
Publication Year :
2024

Abstract

Gradient clipping plays a vital role in training large-scale automatic speech recognition (ASR) models. It is typically applied to minibatch gradients to prevent gradient explosion, and to individual sample gradients to mitigate unintended memorization. This work systematically investigates the impact of a specific granularity of gradient clipping, namely per-core clipping (PCC), across training a wide range of ASR models. We empirically demonstrate that PCC can effectively mitigate unintended memorization in ASR models. Surprisingly, we find that PCC positively influences ASR performance metrics, leading to improved convergence rates and reduced word error rates. To avoid tuning the additional hyperparameter introduced by PCC, we further propose a novel variant, adaptive per-core clipping (APCC), for streamlined optimization. Our findings highlight the multifaceted benefits of PCC as a strategy for robust, privacy-forward ASR model training.

Comment: Accepted to Interspeech'24
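The abstract describes per-core clipping only at a high level, so the following is a minimal sketch of the general idea under stated assumptions: a JAX data-parallel (pmap) training step in which each accelerator core clips its own local gradient before the cross-core average. The clipping bound CLIP_NORM, the toy loss_fn, and the plain SGD update are illustrative placeholders and are not taken from the paper.

```python
from functools import partial

import jax
import jax.numpy as jnp

# Hypothetical per-core clipping bound; in the paper this is the extra
# hyperparameter that APCC is designed to avoid tuning.
CLIP_NORM = 1.0


def global_norm(tree):
    # L2 norm over all leaves of a gradient pytree.
    leaves = jax.tree_util.tree_leaves(tree)
    return jnp.sqrt(sum(jnp.sum(jnp.square(x)) for x in leaves))


def clip_by_global_norm(tree, clip_norm):
    # Rescale the whole pytree so its global norm is at most clip_norm.
    scale = jnp.minimum(1.0, clip_norm / (global_norm(tree) + 1e-6))
    return jax.tree_util.tree_map(lambda x: x * scale, tree)


@partial(jax.pmap, axis_name="cores")
def pcc_train_step(params, batch):
    # Toy linear regression loss standing in for an ASR training loss.
    def loss_fn(p):
        preds = batch["inputs"] @ p["w"]
        return jnp.mean((preds - batch["targets"]) ** 2)

    loss, grads = jax.value_and_grad(loss_fn)(params)
    # Per-core clipping: bound this core's local gradient BEFORE averaging,
    # instead of clipping the already-averaged minibatch gradient or each
    # individual sample gradient.
    grads = clip_by_global_norm(grads, CLIP_NORM)
    # Average the clipped per-core gradients across all cores.
    grads = jax.lax.pmean(grads, axis_name="cores")
    # Plain SGD update; any optimizer could be substituted here.
    params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
    return params, jax.lax.pmean(loss, axis_name="cores")


# Usage sketch: replicate parameters across local devices and shard the batch.
n = jax.local_device_count()
params = {"w": jnp.stack([jnp.ones((4, 2))] * n)}
batch = {"inputs": jnp.ones((n, 8, 4)), "targets": jnp.zeros((n, 8, 2))}
params, loss = pcc_train_step(params, batch)
```

Clipping at per-core granularity sits between minibatch clipping and per-sample clipping in cost: it only requires one extra norm computation per core, while still bounding each core's contribution to the update. The APCC variant mentioned in the abstract replaces the fixed bound with an adaptively chosen one; its exact rule is not given here.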

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438564442
Document Type :
Electronic Resource