1. k2SSL: A Faster and Better Framework for Self-Supervised Speech Representation Learning
- Authors
Yang, Yifan, Zhuo, Jianheng, Jin, Zengrui, Ma, Ziyang, Yang, Xiaoyu, Yao, Zengwei, Guo, Liyong, Kang, Wei, Kuang, Fangjun, Lin, Long, Povey, Daniel, and Chen, Xie
- Subjects
Electrical Engineering and Systems Science - Audio and Speech Processing
- Abstract
Self-supervised learning (SSL) has achieved great success in speech-related tasks, driven by advancements in speech encoder architectures and the expansion of datasets. While Transformer and Conformer architectures have dominated SSL backbones, encoders like Zipformer, which excel in automatic speech recognition (ASR), remain unexplored in SSL. Concurrently, inefficiencies in data processing within existing SSL training frameworks, such as fairseq, pose challenges in managing the growing volume of training data. To address these issues, we propose k2SSL, an open-source framework that offers faster, more memory-efficient, and better-performing self-supervised speech representation learning, with a focus on downstream ASR tasks. The optimized HuBERT and the proposed Zipformer-based SSL systems exhibit substantial reductions in both training time and memory usage during SSL training. Experiments on LibriSpeech and Libri-Light demonstrate that Zipformer-based SSL systems significantly outperform comparable HuBERT and WavLM systems, achieving a relative WER reduction on dev-other/test-other of up to 34.8%/32.4% compared to HuBERT Base after supervised fine-tuning, along with a 3.5x pre-training speedup in total GPU hours.
- Comment
Submitted to ICASSP 2025
- Published
2024
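
As a worked illustration of the headline metric in the abstract: relative WER reduction (WERR) measures the proposed system's error rate against the baseline's. The absolute WER values below are hypothetical placeholders chosen only to reproduce the reported 34.8% figure; the paper itself states only the relative reductions.

```latex
% Relative WER reduction; baseline and proposed WERs here are
% illustrative assumptions, not results reported in the paper.
\[
\mathrm{WERR}
  = \frac{\mathrm{WER}_{\text{baseline}} - \mathrm{WER}_{\text{proposed}}}
         {\mathrm{WER}_{\text{baseline}}}
  = \frac{10.0\% - 6.52\%}{10.0\%}
  = 34.8\%
\]
```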