
Comparison of Multilingual Self-Supervised and Weakly-Supervised Speech Pre-Training for Adaptation to Unseen Languages

Authors:
Rouditchenko, Andrew
Khurana, Sameer
Thomas, Samuel
Feris, Rogerio
Karlinsky, Leonid
Kuehne, Hilde
Harwath, David
Kingsbury, Brian
Glass, James
Publication Year:
2023

Abstract

Recent models such as XLS-R and Whisper have made multilingual speech technologies more accessible by pre-training on audio from around 100 spoken languages each. However, there are thousands of spoken languages worldwide, and adapting to new languages is an important problem. In this work, we aim to understand which model adapts better to languages unseen during pre-training. We fine-tune both models on 13 unseen languages and 18 seen languages. Our results show that the number of hours seen per language and language family during pre-training is predictive of how the models compare, despite the significant differences in the pre-training methods.

Comment: Accepted at Interspeech 2023
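The adaptation recipe described in the abstract is fine-tuning each pre-trained model on labeled speech from the target language. Below is a minimal sketch of how such a setup is commonly prepared with the Hugging Face transformers library; the specific checkpoints ("openai/whisper-small", "facebook/wav2vec2-xls-r-300m"), the vocabulary size, and the layer-freezing choice are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: loading Whisper and XLS-R for fine-tuning on a language unseen
# during pre-training. Checkpoints and hyperparameters are assumptions.
from transformers import (
    WhisperForConditionalGeneration,
    WhisperProcessor,
    Wav2Vec2ForCTC,
    Wav2Vec2FeatureExtractor,
)

# Whisper: weakly-supervised encoder-decoder; the whole model is typically
# fine-tuned on (audio, transcript) pairs from the new language.
whisper = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
whisper_processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# XLS-R: self-supervised encoder; fine-tuning adds a randomly initialized CTC
# head whose vocabulary is built from the new language's transcripts
# (vocab_size=64 is a hypothetical character-vocabulary size).
xlsr = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    vocab_size=64,
    ctc_loss_reduction="mean",
)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(
    "facebook/wav2vec2-xls-r-300m"
)

# A common choice is to freeze the convolutional feature encoder of XLS-R
# and train only the Transformer layers and the new CTC head.
xlsr.freeze_feature_encoder()

# Both models are then trained on labeled speech from the target language,
# e.g. with the standard Trainer / Seq2SeqTrainer loops.
```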

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.12606
Document Type:
Working Paper