
WaveFit: An Iterative and Non-autoregressive Neural Vocoder based on Fixed-Point Iteration

Authors :
Koizumi, Yuma
Yatabe, Kohei
Zen, Heiga
Bacchiani, Michiel
Publication Year :
2022

Abstract

Denoising diffusion probabilistic models (DDPMs) and generative adversarial networks (GANs) are popular generative models for neural vocoders. DDPMs and GANs can be characterized by the iterative denoising framework and adversarial training, respectively. This study proposes a fast and high-quality neural vocoder called WaveFit, which integrates the essence of GANs into a DDPM-like iterative framework based on fixed-point iteration. WaveFit iteratively denoises an input signal and trains a deep neural network (DNN) to minimize an adversarial loss calculated from the intermediate outputs at all iterations. Subjective (side-by-side) listening tests showed no statistically significant differences in naturalness between human natural speech and speech synthesized by WaveFit with five iterations. Furthermore, the inference speed of WaveFit was more than 240 times faster than WaveRNN. Audio demos are available at google.github.io/df-conformer/wavefit/

Comment: Accepted to IEEE SLT 2022
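To make the abstract's idea concrete, the following is a minimal sketch of a fixed-point-style iterative refinement loop with a loss summed over all intermediate outputs. It is not the authors' implementation: the network architecture, the update rule, the names (DenoiserNet, wavefit_refine), and the use of a simple L1 loss in place of the paper's adversarial and spectral losses are all illustrative assumptions.

```python
# Hypothetical sketch of a WaveFit-style refinement loop, not the official code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoiserNet(nn.Module):
    """Placeholder denoising DNN (the paper uses a WaveGrad-like architecture)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, y, cond):
        # Predict a residual from the current estimate and the conditioning signal.
        return self.net(torch.cat([y, cond], dim=1))

def wavefit_refine(model, noise, cond, num_iterations=5):
    """Apply the same network repeatedly (fixed-point iteration) and keep
    every intermediate output so the training loss can cover all of them."""
    y = noise
    intermediates = []
    for _ in range(num_iterations):
        y = y - model(y, cond)  # one denoising step (assumed update rule)
        intermediates.append(y)
    return intermediates

# Toy usage with random tensors standing in for noise, conditioning, and target.
model = DenoiserNet()
noise = torch.randn(4, 1, 16000)
cond = torch.randn(4, 1, 16000)
target = torch.randn(4, 1, 16000)

outputs = wavefit_refine(model, noise, cond, num_iterations=5)

# The loss is accumulated over ALL intermediate outputs, not only the final one;
# the paper uses adversarial and spectral terms here, L1 is just a stand-in.
loss = sum(F.l1_loss(o, target) for o in outputs)
loss.backward()
```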

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2210.01029
Document Type :
Working Paper