
Deep Reinforcement Learning for Articulatory Synthesis in a Vowel-to-Vowel Imitation Task.

Authors :
Shitov D
Pirogova E
Wysocki TA
Lech M
Source :
Sensors (Basel, Switzerland) [Sensors (Basel)] 2023 Mar 24; Vol. 23 (7). Date of Electronic Publication: 2023 Mar 24.
Publication Year :
2023

Abstract

Articulatory synthesis is one of the approaches used for modeling human speech production. In this study, we propose a model-based algorithm for learning the policy that controls the vocal tract of the articulatory synthesizer in a vowel-to-vowel imitation task. Our method does not require external training data, since the policy is learned through interactions with the vocal tract model. To improve the sample efficiency of learning, we trained the model of speech production dynamics simultaneously with the policy. The policy was trained in a supervised way using predictions of the model of speech production dynamics. To stabilize the training, early stopping was incorporated into the algorithm. Additionally, we extracted acoustic features using an acoustic word embedding (AWE) model. This model was trained to discriminate between different words and to enable compact encoding of acoustics while preserving contextual information of the input. Our preliminary experiments showed that introducing this AWE model was crucial for guiding the policy toward a near-optimal solution. The acoustic embeddings obtained using the proposed approach proved useful when applied as inputs to the policy and to the model of speech production dynamics.
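The training scheme the abstract describes — a forward model of speech production dynamics fit from interactions, a policy trained in a supervised way through that frozen model, and early stopping when the loss stalls — can be illustrated with a minimal sketch. Everything here is a toy assumption, not the paper's implementation: the "vocal tract" is a fixed linear map, the AWE features are stood in for by plain embedding vectors, and both the dynamics model and the policy are linear.

```python
import numpy as np

rng = np.random.default_rng(0)
A_DIM, E_DIM = 4, 3  # toy articulatory-action and acoustic-embedding sizes

# Toy "vocal tract": a fixed linear map from actions to acoustic embeddings
# (orthonormal rows keep this sketch well conditioned).
W_true = np.linalg.qr(rng.normal(size=(A_DIM, E_DIM)))[0].T  # (E_DIM, A_DIM)

def vocal_tract(action):
    """Synthesize an acoustic embedding (standing in for AWE features)."""
    return W_true @ action

# 1) Fit the model of speech production dynamics from interaction data only
#    (no external training data): random actions and the sounds they produce.
actions = rng.normal(size=(200, A_DIM))
embeddings = actions @ W_true.T
W_model = np.linalg.lstsq(actions, embeddings, rcond=None)[0].T  # (E_DIM, A_DIM)

# 2) Train the policy (target embedding -> action) in a supervised way, by
#    gradient descent through the frozen dynamics model, with early stopping.
targets = rng.normal(size=(256, E_DIM))  # fixed set of target embeddings
P = np.zeros((A_DIM, E_DIM))             # linear policy parameters
lr, best, bad, patience = 0.5, np.inf, 0, 10
for step in range(2000):
    pred = targets @ P.T @ W_model.T     # embeddings the policy would produce
    err = pred - targets
    loss = float(np.mean(err ** 2))
    if loss < best - 1e-12:              # early stopping once the loss stalls
        best, bad = loss, 0
    else:
        bad += 1
        if bad >= patience:
            break
    P -= lr * (W_model.T @ err.T @ targets) / len(targets)

# Imitation check: the policy's action should reproduce a target embedding.
target = rng.normal(size=E_DIM)
print(np.allclose(vocal_tract(P @ target), target, atol=1e-4))  # → True
```

The key design point mirrored here is that the policy never sees gradients from the real vocal tract: its supervision signal comes entirely from the learned dynamics model, which is what makes the approach sample efficient.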

Details

Language :
English
ISSN :
1424-8220
Volume :
23
Issue :
7
Database :
MEDLINE
Journal :
Sensors (Basel, Switzerland)
Publication Type :
Academic Journal
Accession number :
37050496
Full Text :
https://doi.org/10.3390/s23073437