Weighted Cross-entropy for Low-Resource Languages in Multilingual Speech Recognition
- Source: Proceedings of Interspeech 2024
- Publication Year: 2024
Abstract
- This paper addresses the challenge of integrating low-resource languages into multilingual automatic speech recognition (ASR) systems. We introduce a novel application of weighted cross-entropy, typically used for unbalanced datasets, to facilitate the integration of low-resource languages into pre-trained multilingual ASR models within the context of continual multilingual learning. We fine-tune the Whisper multilingual ASR model on five high-resource languages and one low-resource language, employing language-weighted dynamic cross-entropy and data augmentation. The results show a remarkable 6.69% word error rate (WER) reduction for the low-resource language compared to the fine-tuned model without applying our approach, and a 48.86% WER reduction compared to the original Whisper model. In addition, our approach yields an average WER reduction of 3.29% across the six languages, showing no degradation for the high-resource languages.
- Comment: 5 pages, 1 figure. Presented at Interspeech 2024
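To make the core idea concrete, the sketch below shows weighted cross-entropy with per-language weights in plain Python. The language codes, weight values, and function names are illustrative assumptions, not the paper's actual configuration; the paper's weighting is dynamic, whereas this minimal sketch uses fixed weights.

```python
import math

def weighted_cross_entropy(probs, target, weight):
    """Cross-entropy for one example, scaled by a per-language weight.

    probs  -- predicted probability distribution over classes (list of floats)
    target -- index of the correct class
    weight -- scalar weight; larger for low-resource languages
    """
    return -weight * math.log(probs[target])

# Hypothetical per-language weights: upweight the low-resource language
# so its examples contribute more to the gradient during fine-tuning.
LANG_WEIGHTS = {"de": 1.0, "gl": 3.0}

def batch_loss(batch):
    """Average weighted loss over a mixed-language batch.

    batch -- list of (probs, target_index, language_code) tuples
    """
    total = sum(
        weighted_cross_entropy(probs, target, LANG_WEIGHTS[lang])
        for probs, target, lang in batch
    )
    return total / len(batch)
```

In the paper's dynamic variant, the weights would be adjusted during training rather than fixed in advance; the static dictionary here only illustrates how the loss term is scaled per language.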
Details
- Database: arXiv
- Journal: Proceedings of Interspeech 2024
- Publication Type: Report
- Accession number: edsarx.2409.16954
- Document Type: Working Paper
- Full Text: https://doi.org/10.21437/Interspeech.2024-734