Two-Stage Pre-Training for Sequence to Sequence Speech Recognition
- Source : IJCNN
- Publication Year : 2021
- Publisher : IEEE, 2021.
Abstract
- The attention-based encoder-decoder structure is popular in automatic speech recognition (ASR). However, it relies heavily on transcribed data. In this paper, we propose a novel pre-training strategy for the encoder-decoder sequence-to-sequence (seq2seq) model that utilizes unpaired speech and transcripts. The pre-training process consists of two stages: acoustic pre-training and linguistic pre-training. In the acoustic pre-training stage, we use a large amount of speech to pre-train the encoder by predicting masked speech feature chunks from their contexts. In the linguistic pre-training stage, we first generate synthesized speech from a large number of transcripts using a text-to-speech (TTS) system and then use the synthesized paired data to pre-train the decoder. The two-stage pre-training is conducted on the AISHELL-2 dataset, and we apply the pre-trained model to multiple subsets of AISHELL-1 and HKUST for post-training. As the size of the subset increases, we obtain relative character error rate reductions (CERR) ranging from 38.24% to 7.88% on AISHELL-1 and from 12.00% to 1.20% on HKUST.
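The acoustic pre-training stage described above can be illustrated with a minimal sketch of chunk masking over speech feature frames. This is not the authors' implementation; the chunk size, masking ratio, and zero-fill strategy are assumptions chosen for illustration. The idea is that contiguous chunks of feature frames are hidden, and the encoder is then trained to predict the original frames at the masked positions from the surrounding context.

```python
import random

def mask_feature_chunks(frames, chunk_size=4, mask_ratio=0.3, seed=0):
    """Randomly zero out contiguous chunks of speech feature frames.

    frames: list of feature vectors (each a list of floats).
    Returns (masked_frames, masked_positions). In acoustic pre-training,
    the encoder would be optimized to reconstruct the original frames at
    masked_positions given the unmasked context.

    chunk_size and mask_ratio are illustrative values, not taken from
    the paper.
    """
    rng = random.Random(seed)
    n = len(frames)
    num_chunks = max(1, int(n * mask_ratio / chunk_size))
    masked = [list(f) for f in frames]   # deep-ish copy so input is untouched
    positions = set()
    for _ in range(num_chunks):
        start = rng.randrange(0, max(1, n - chunk_size + 1))
        for i in range(start, min(start + chunk_size, n)):
            masked[i] = [0.0] * len(frames[i])  # hide the frame
            positions.add(i)
    return masked, sorted(positions)

# Example: 20 frames of 3-dimensional features
frames = [[float(t)] * 3 for t in range(20)]
masked, pos = mask_feature_chunks(frames)
```

A training loop would feed `masked` to the encoder and compute a reconstruction loss (e.g. L1 or L2) only over `pos`, analogous to masked-language-model objectives in text pre-training.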
Details
- Database : OpenAIRE
- Journal : 2021 International Joint Conference on Neural Networks (IJCNN)
- Accession number : edsair.doi...........c735f42c5cba6eea4e6eebb0b305ca17
- Full Text : https://doi.org/10.1109/ijcnn52387.2021.9534170