
Vec-Tok Speech: speech vectorization and tokenization for neural speech generation

Authors :
Zhu, Xinfa
Lv, Yuanjun
Lei, Yi
Li, Tao
He, Wendi
Zhou, Hongbin
Lu, Heng
Xie, Lei
Publication Year :
2023

Abstract

Language models (LMs) have recently flourished in natural language processing and computer vision, generating high-fidelity text or images across a variety of tasks. In contrast, current speech generative models still struggle with speech quality and task generalization. This paper presents Vec-Tok Speech, an extensible framework that unifies multiple speech generation tasks and produces expressive, high-fidelity speech. Specifically, we propose a novel speech codec based on speech vectors and semantic tokens. Speech vectors capture the acoustic details needed for high-fidelity speech reconstruction, while semantic tokens focus on the linguistic content of speech, facilitating language modeling. Built on this codec, Vec-Tok Speech leverages an LM as the core of speech generation. Moreover, Byte-Pair Encoding (BPE) is introduced to reduce token length and bit rate, lowering exposure bias and extending context coverage, which improves LM performance. Vec-Tok Speech supports intra- and cross-lingual zero-shot voice conversion (VC), zero-shot speaking-style-transfer text-to-speech (TTS), speech-to-speech translation (S2ST), speech denoising, and speaker de-identification and anonymization. Experiments show that Vec-Tok Speech, trained on 50k hours of speech, outperforms other state-of-the-art (SOTA) models. Code will be available at https://github.com/BakerBunker/VecTok.

Comment: 15 pages, 2 figures
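
The abstract mentions applying Byte-Pair Encoding to the semantic token sequences to shorten them before language modeling. The paper's own implementation is not reproduced here; the following is a minimal, self-contained sketch of how BPE over discrete token sequences can reduce sequence length, with illustrative token values and merge counts chosen for the example.

```python
# Minimal sketch (not the authors' implementation): Byte-Pair Encoding applied to
# discrete semantic token sequences, merging frequent adjacent pairs into new ids
# so the language model sees shorter sequences.
from collections import Counter

def merge_pair(seq, pair, new_id):
    """Replace every non-overlapping occurrence of `pair` in `seq` with `new_id`."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def train_bpe(sequences, num_merges):
    """Learn BPE merges over integer token sequences; returns ordered merge rules."""
    seqs = [list(s) for s in sequences]
    merges = []
    next_id = max(t for s in seqs for t in s) + 1
    for _ in range(num_merges):
        pair_counts = Counter()
        for s in seqs:
            for a, b in zip(s, s[1:]):
                pair_counts[(a, b)] += 1
        if not pair_counts:
            break
        best, _ = pair_counts.most_common(1)[0]
        merges.append((best, next_id))
        seqs = [merge_pair(s, best, next_id) for s in seqs]
        next_id += 1
    return merges

def apply_bpe(seq, merges):
    """Apply learned merges in order to a new token sequence."""
    for pair, new_id in merges:
        seq = merge_pair(seq, pair, new_id)
    return seq

# Toy usage: repeated token patterns (common in semantic tokens of speech)
# get merged, shortening the sequence and lowering the effective bit rate.
corpus = [[3, 3, 7, 7, 7, 12, 3, 3], [7, 7, 12, 12, 3, 3, 7, 7]]
merges = train_bpe(corpus, num_merges=4)
print(apply_bpe([3, 3, 7, 7, 12, 12], merges))
```

Shorter sequences mean fewer autoregressive steps, which is the mechanism behind the abstract's claim of lower exposure bias and longer context coverage.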

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438487781
Document Type :
Electronic Resource