
Estimating Underlying Articulatory Targets of Thai Vowels by Using Deep Learning Based on Generating Synthetic Samples From a 3D Vocal Tract Model and Data Augmentation

Authors :
Thanat Lapthawan
Santitham Prom-On
Peter Birkholz
Yi Xu
Source :
IEEE Access, Vol. 10, pp. 41489-41502 (2022)
Publication Year :
2022
Publisher :
IEEE, 2022.

Abstract

Representation learning is one of the fundamental issues in modeling articulatory-based speech synthesis with target-driven models. This paper proposes a computational strategy for learning the underlying articulatory targets of a 3D articulatory speech synthesis model using a bidirectional long short-term memory (BLSTM) recurrent neural network, starting from a small set of representative seed samples. From a seeding set created in VocalTractLab, a larger training set was generated that provided richer contextual variation for the model to learn from. A deep learning model for acoustic-to-target mapping was then trained to capture the inverse relation of the articulation process. This allows the trained model to map given acoustic data onto articulatory target parameters, which can then be used to characterize target distributions across linguistic contexts. The model was evaluated on two criteria: the accuracy of the acoustic-to-articulatory mapping, and the perceptual quality of speech resynthesized from articulatory targets estimated from recordings of native Thai speakers. The model achieved more than 80% phoneme classification accuracy in a listening test conducted with 25 native Thai speakers. The results indicate that the model can accurately imitate speech with a high degree of phonemic precision.
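The acoustic-to-target mapping described above is, at its core, a sequence-regression problem: frames of acoustic features in, a vector of vocal tract target parameters out. The sketch below shows a minimal BLSTM regressor of this kind in PyTorch; it is illustrative only, not the authors' implementation. The feature dimension, the number of target parameters, the mean-pooling readout, and all hyperparameters are assumptions for the sake of a self-contained example.

```python
# Minimal sketch of an acoustic-to-articulatory-target regressor using a
# bidirectional LSTM, in the spirit of the paper's inverse-mapping model.
# All dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class AcousticToTargetBLSTM(nn.Module):
    def __init__(self, n_acoustic=40, n_targets=20, hidden=128, layers=2):
        super().__init__()
        # Bidirectional LSTM over the sequence of acoustic frames.
        self.blstm = nn.LSTM(
            input_size=n_acoustic,
            hidden_size=hidden,
            num_layers=layers,
            batch_first=True,
            bidirectional=True,
        )
        # Pool over time and regress to a single vector of
        # articulatory target parameters per input sequence.
        self.head = nn.Linear(2 * hidden, n_targets)

    def forward(self, x):
        # x: (batch, frames, n_acoustic)
        out, _ = self.blstm(x)      # (batch, frames, 2 * hidden)
        pooled = out.mean(dim=1)    # average over the time axis
        return self.head(pooled)    # (batch, n_targets)

if __name__ == "__main__":
    model = AcousticToTargetBLSTM()
    dummy = torch.randn(8, 100, 40)  # 8 sequences of 100 acoustic frames
    targets = model(dummy)
    print(targets.shape)             # torch.Size([8, 20])
    # Training would minimize a regression loss (e.g. MSE) against target
    # parameters taken from the synthetic seed/augmented training set.
```

In such a setup, training pairs would come from the augmented set of synthetic samples, where the ground-truth target parameters are known because the speech was generated from them; the trained network can then be applied to recorded speech to estimate targets for resynthesis.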

Details

Language :
English
ISSN :
2169-3536
Volume :
10
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.8a31e64fc52b4e6499020ae89d54983a
Document Type :
Article
Full Text :
https://doi.org/10.1109/ACCESS.2022.3166922