
Towards Robust Speech Representation Learning for Thousands of Languages

Authors:
Chen, William
Zhang, Wangyou
Peng, Yifan
Li, Xinjian
Tian, Jinchuan
Shi, Jiatong
Chang, Xuankai
Maiti, Soumi
Livescu, Karen
Watanabe, Shinji
Publication Year: 2024

Abstract

Self-supervised learning (SSL) has helped extend speech technologies to more languages by reducing the need for labeled data. However, models are still far from supporting the world's 7000+ languages. We propose XEUS, a Cross-lingual Encoder for Universal Speech, trained on over 1 million hours of data across 4057 languages, extending the language coverage of SSL models 4-fold. We combine 1 million hours of speech from existing publicly accessible corpora with a newly created corpus of 7400+ hours from 4057 languages, which will be publicly released. To handle the diverse conditions of multilingual speech data, we augment the typical SSL masked prediction approach with a novel dereverberation objective, increasing robustness. We evaluate XEUS on several benchmarks, and show that it consistently outperforms or achieves comparable results to state-of-the-art (SOTA) SSL models across a variety of tasks. XEUS sets a new SOTA on the ML-SUPERB benchmark: it outperforms MMS 1B and w2v-BERT 2.0 v2 by 0.8% and 4.4% respectively, despite having fewer parameters or less pre-training data. Checkpoints, code, and data can be found at https://www.wavlab.org/activities/2024/xeus/.

Comment: Updated affiliations; 20 pages
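The abstract describes augmenting SSL masked prediction with a dereverberation objective. As a rough illustration only (the record contains no code, and this is not the authors' implementation), below is a minimal PyTorch sketch of one way such an objective can be combined with masked prediction: the encoder is fed artificially reverberated audio but must predict targets derived from the clean audio at masked frames, so it implicitly learns to dereverberate. The names `reverberate`, `encoder.quantize`, and `encoder.forward_masked` are hypothetical stand-ins, not an API from the paper.

```python
# Hedged sketch of a dereverberation-augmented masked-prediction loss.
# `encoder.quantize` and `encoder.forward_masked` are hypothetical
# stand-ins for a HuBERT-style quantizer and a masking Transformer encoder.
import torch
import torch.nn.functional as F


def reverberate(clean: torch.Tensor, rir: torch.Tensor) -> torch.Tensor:
    """Convolve clean waveforms (B, T) with room impulse responses (B, L)."""
    B, T = clean.shape
    rev = F.conv1d(
        clean.unsqueeze(0),           # (1, B, T): treat the batch as channels
        rir.flip(-1).unsqueeze(1),    # (B, 1, L): one filter per example
        padding=rir.shape[-1] - 1,    # full convolution, trimmed below
        groups=B,                     # each example gets its own RIR
    )
    return rev.squeeze(0)[:, :T]      # trim back to (B, T)


def dereverb_masked_prediction_loss(encoder, clean_wav, rir, mask_prob=0.08):
    """One training step: targets come from the clean signal, but the
    encoder only ever sees the reverberant version, so predicting the
    masked targets requires implicit dereverberation."""
    with torch.no_grad():
        targets = encoder.quantize(clean_wav)   # (B, T') discrete units
    noisy = reverberate(clean_wav, rir)
    # logits: (B, T', V) unit predictions; mask: (B, T') boolean mask
    logits, mask = encoder.forward_masked(noisy, mask_prob=mask_prob)
    return F.cross_entropy(logits[mask], targets[mask])
```

The predict-clean-targets-from-degraded-input pattern shown here follows the general denoising masked-prediction setup popularized by WavLM; the paper's actual dereverberation objective and its place among the other pre-training augmentations are specified in the full text, not in this record.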

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2407.00837
Document Type: Working Paper