
Enabling On-Device Training of Speech Recognition Models with Federated Dropout

Authors:
Guliani, Dhruv
Zhou, Lillian
Ryu, Changwan
Yang, Tien-Ju
Zhang, Harry
Xiao, Yonghui
Beaufays, Françoise
Motta, Giovanni

Publication Year:
2021

Abstract

Federated learning can be used to train machine learning models on the edge on local data that never leave devices, providing privacy by default. This presents a challenge pertaining to the communication and computation costs associated with clients' devices. These costs are strongly correlated with the size of the model being trained, and are significant for state-of-the-art automatic speech recognition models. We propose using federated dropout to reduce the size of client models while training a full-size model server-side. We provide empirical evidence of the effectiveness of federated dropout, and propose a novel approach to vary the dropout rate applied at each layer. Furthermore, we find that federated dropout enables a set of smaller sub-models within the larger model to independently have low word error rates, making it easier to dynamically adjust the size of the model deployed for inference.

Comment: © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses
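
For intuition, below is a minimal NumPy sketch of the idea behind federated dropout, assuming a toy two-layer dense model; the dropout rate, the toy client update, and the aggregation rule are illustrative assumptions, not the authors' implementation. Each client receives and trains only a randomly selected sub-model, and the server folds the updates back into the full-size model; the per-layer rates the paper proposes would correspond to drawing a different fraction of units for each layer.

import numpy as np

rng = np.random.default_rng(0)

IN_DIM, HIDDEN, OUT_DIM = 32, 64, 10

# Full-size server model: two dense layers (input -> hidden -> output).
W1 = rng.normal(size=(IN_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, OUT_DIM))

DROPOUT = 0.5  # fraction of hidden units dropped per client (assumed value)

def sample_submodel(rng):
    # Choose which hidden units this client keeps. Dropping a hidden unit
    # removes one column of W1 and the matching row of W2, so the client
    # downloads and trains a smaller model than the one the server holds.
    keep = np.sort(rng.choice(HIDDEN, size=int(HIDDEN * (1 - DROPOUT)),
                              replace=False))
    return keep, W1[:, keep].copy(), W2[keep, :].copy()

def client_update(w1_sub, w2_sub, rng):
    # Stand-in for local training on the client's private data; a real
    # client would run SGD on local speech data. Here we just add noise.
    return (w1_sub + 0.01 * rng.normal(size=w1_sub.shape),
            w2_sub + 0.01 * rng.normal(size=w2_sub.shape))

# One federated round: each client trains a different sub-model, and the
# server averages the returned deltas into the full-size model, dividing
# by how many clients actually touched each hidden unit.
NUM_CLIENTS = 4
delta1, delta2 = np.zeros_like(W1), np.zeros_like(W2)
counts = np.zeros(HIDDEN)
for _ in range(NUM_CLIENTS):
    keep, w1_sub, w2_sub = sample_submodel(rng)
    new1, new2 = client_update(w1_sub, w2_sub, rng)
    delta1[:, keep] += new1 - w1_sub
    delta2[keep, :] += new2 - w2_sub
    counts[keep] += 1

touched = counts > 0
W1[:, touched] += delta1[:, touched] / counts[touched]
W2[touched, :] += delta2[touched, :] / counts[touched][:, None]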

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2110.03634
Document Type:
Working Paper