
Articulation GAN: Unsupervised modeling of articulatory learning

Authors:
Beguš, Gašper
Zhou, Alan
Wu, Peter
Anumanchipalli, Gopala K.
Source:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing
Publication Year:
2022

Abstract

Generative deep neural networks are widely used for speech synthesis, but most existing models directly generate waveforms or spectral outputs. Humans, however, produce speech by controlling articulators, and speech sounds arise from the physical properties of sound propagation. We introduce the Articulatory Generator into the Generative Adversarial Network paradigm, yielding a new unsupervised generative model of speech production/synthesis. The Articulatory Generator more closely mimics human speech production by learning to generate articulatory representations (electromagnetic articulography, or EMA) in a fully unsupervised manner. A separate pre-trained physical model (ema2wav) then transforms the generated EMA representations into speech waveforms, which are passed to the Discriminator for evaluation. Articulatory analysis suggests that the network learns to control articulators in a manner similar to humans during speech production. Acoustic analysis of the outputs suggests that the network learns to generate both words present in the training distribution and novel words absent from it. We additionally discuss the implications of articulatory representations for cognitive models of human language and for speech technology in general.

Comment: ICASSP 2023
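
The pipeline described above chains three components: a Generator that maps latent noise to EMA trajectories, a frozen pre-trained ema2wav model that converts those trajectories to waveforms, and a Discriminator that scores the resulting audio against real speech. Below is a minimal sketch of that forward pass; the module names, layer sizes, and shapes (ArticulatoryGenerator, EMAToWaveform, WaveformDiscriminator, 12 EMA channels, 200 frames) are illustrative assumptions and are not taken from the authors' implementation.

# Minimal sketch of the Articulatory GAN forward pass described in the abstract.
# All module names, layer sizes, and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ArticulatoryGenerator(nn.Module):
    """Maps a latent vector to a sequence of EMA channels (articulator trajectories)."""
    def __init__(self, latent_dim=100, ema_channels=12, frames=200):
        super().__init__()
        self.frames, self.ema_channels = frames, ema_channels
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, ema_channels * frames),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.frames, self.ema_channels)

class EMAToWaveform(nn.Module):
    """Stand-in for the pre-trained ema2wav physical model; kept frozen during GAN training."""
    def __init__(self, ema_channels=12, upsample=80):
        super().__init__()
        self.proj = nn.Linear(ema_channels, upsample)  # crude frame-to-sample expansion

    def forward(self, ema):
        wav = self.proj(ema)      # (B, frames, upsample)
        return wav.flatten(1)     # (B, frames * upsample) waveform samples

class WaveformDiscriminator(nn.Module):
    """Scores waveforms as real (training data) or generated."""
    def __init__(self, n_samples=200 * 80):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_samples, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, wav):
        return self.net(wav)

# One generator step: latent -> EMA -> waveform -> discriminator score.
G, ema2wav, D = ArticulatoryGenerator(), EMAToWaveform(), WaveformDiscriminator()
for p in ema2wav.parameters():
    p.requires_grad_(False)       # the EMA-to-waveform model is pre-trained and fixed
z = torch.randn(4, 100)
score = D(ema2wav(G(z)))          # gradients flow back through ema2wav into G
print(score.shape)                # torch.Size([4, 1])

Because ema2wav is frozen, only the Generator and Discriminator are updated during adversarial training, which is what lets the Generator learn articulatory control purely from the acoustic feedback.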

Details

Database:
arXiv
Journal:
ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing
Publication Type:
Report
Accession number:
edsarx.2210.15173
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/ICASSP49357.2023.10096800