
A Universal BERT-Based Front-End Model for Mandarin Text-To-Speech Synthesis

Authors :
Beibei Hu
Zilong Bai
Source :
ICASSP
Publication Year :
2021
Publisher :
IEEE, 2021.

Abstract

The front-end text processing module is an essential component that significantly influences the intelligibility and naturalness of a Mandarin text-to-speech system. For commercial text-to-speech systems, the Mandarin front-end must meet the requirements of high accuracy and low latency while also remaining maintainable. In this paper, we propose a universal BERT-based model that can be used for various tasks in the Mandarin front-end without changing its architecture. The feature extractor and classifiers in the model are shared across several sub-tasks, which improves expandability and maintainability. We trained and evaluated the model on polyphone disambiguation, text normalization, and prosodic boundary prediction, both as single-task modules and with multi-task learning. Results show that the model maintains high performance for single-task modules and achieves higher accuracy and lower latency for multi-task modules, indicating that the proposed universal front-end model is promising as a maintainable Mandarin front-end for commercial applications.
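The architecture described in the abstract, a single shared BERT feature extractor feeding lightweight per-task classifiers, can be illustrated with a minimal sketch. The code below assumes a PyTorch and Hugging Face Transformers setup; the class name, head names, and label counts are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a shared-encoder, multi-head Mandarin front-end model.
# Assumes PyTorch and Hugging Face Transformers are installed; the task names
# and label counts below are illustrative placeholders.
import torch.nn as nn
from transformers import BertModel


class UniversalFrontEnd(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", task_num_labels=None):
        super().__init__()
        # One BERT feature extractor shared by every front-end sub-task.
        self.encoder = BertModel.from_pretrained(bert_name)
        hidden = self.encoder.config.hidden_size
        # One lightweight token-level classifier per sub-task
        # (label counts here are placeholders, not values from the paper).
        task_num_labels = task_num_labels or {
            "polyphone": 100,   # pronunciation classes for polyphonic characters
            "prosody": 4,       # prosodic boundary levels
            "text_norm": 20,    # non-standard-word categories
        }
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()}
        )

    def forward(self, input_ids, attention_mask, task):
        # Shared token-level features from the BERT encoder.
        features = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Route through the task-specific classifier; the backbone stays
        # unchanged regardless of which front-end task is requested.
        return self.heads[task](features)
```

In a multi-task setting, batches from the different sub-tasks can be interleaved and their losses summed, so a single forward pass through the shared encoder serves all front-end tasks; adding a new sub-task only requires adding another head, which is consistent with the maintainability argument made in the abstract.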

Details

Database :
OpenAIRE
Journal :
ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Accession number :
edsair.doi...........0269cc1dccab964b4c1e5f59f57ff629
Full Text :
https://doi.org/10.1109/icassp39728.2021.9414935