
Variational autoencoders in the presence of low-dimensional data: landscape and implicit bias

Authors:
Koehler, Frederic
Mehta, Viraj
Zhou, Chenghui
Risteski, Andrej
Publication Year:
2021
Publisher:
arXiv, 2021.

Abstract

Variational Autoencoders (VAEs) are one of the most commonly used generative models, particularly for image data. A prominent difficulty in training VAEs is data that is supported on a lower-dimensional manifold. Recent work by Dai and Wipf (2020) proposes a two-stage training algorithm for VAEs, based on a conjecture that in standard VAE training the generator will converge to a solution with zero variance that is correctly supported on the ground-truth manifold. They gave partial support for the conjecture by showing that some optima of the VAE loss do satisfy this property, but they did not analyze the training dynamics. In this paper, we show that for linear encoders/decoders the conjecture is true: VAE training does recover a generator with support equal to the ground-truth manifold, and it does so due to an implicit bias of gradient descent rather than merely the VAE loss itself. In the nonlinear case, we show that VAE training frequently learns a higher-dimensional manifold which is a superset of the ground-truth manifold.

Comment: Accepted as a conference paper at ICLR 2022
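
To make the setting concrete, here is a minimal numerical sketch (not code from the paper) of the linear case the abstract describes: a linear VAE is trained on data supported on a k-dimensional subspace of R^d, and afterwards we inspect the two quantities at issue, the learned decoder variance gamma and the alignment of the decoder's column space with the ground-truth subspace. All dimensions, step counts, and the choice of the Adam optimizer are illustrative assumptions; the paper's analysis concerns gradient descent dynamics, so this is only a quick demonstration of the claimed phenomenon, not a reproduction of its results.

    # Hypothetical sketch: linear VAE on subspace-supported data (PyTorch).
    import torch

    torch.manual_seed(0)
    d, k, n = 20, 3, 2000            # ambient dim, manifold dim, sample count
    A = torch.randn(d, k)            # ground-truth map; data lies in span(A)
    X = torch.randn(n, k) @ A.T      # samples x = A z, z ~ N(0, I_k)

    W = torch.randn(d, k, requires_grad=True)        # decoder mean: x_hat = W z
    B = (0.1 * torch.randn(k, d)).requires_grad_()   # encoder mean: mu = B x
    log_s2 = torch.zeros(k, requires_grad=True)      # encoder log-variance (diagonal)
    log_gamma = torch.zeros(1, requires_grad=True)   # decoder log-variance (scalar)

    opt = torch.optim.Adam([W, B, log_s2, log_gamma], lr=1e-2)
    for step in range(5000):
        mu = X @ B.T                                 # q(z|x) means, shape (n, k)
        z = mu + torch.exp(0.5 * log_s2) * torch.randn_like(mu)  # reparameterization
        gamma = torch.exp(log_gamma)
        # Gaussian negative log-likelihood of x given z, plus KL(q(z|x) || N(0, I))
        recon = ((X - z @ W.T) ** 2).sum(dim=1) / (2 * gamma) \
                + 0.5 * d * torch.log(2 * torch.pi * gamma)
        kl = 0.5 * (torch.exp(log_s2) + mu ** 2 - 1 - log_s2).sum(dim=1)
        loss = (recon + kl).mean()                   # negative ELBO
        opt.zero_grad()
        loss.backward()
        opt.step()

    # If training behaves as the abstract claims, gamma shrinks toward 0 and
    # span(W) aligns with span(A); measure alignment via principal angles.
    QA, _ = torch.linalg.qr(A)
    QW, _ = torch.linalg.qr(W.detach())
    cosines = torch.linalg.svdvals(QA.T @ QW)        # cosines of principal angles
    print(f"decoder variance gamma: {torch.exp(log_gamma).item():.4f}")
    print(f"min cosine of principal angles, span(W) vs span(A): {cosines.min().item():.4f}")

A minimum cosine near 1 indicates the learned decoder's column space matches the ground-truth subspace; a small gamma corresponds to the zero-variance limit discussed in the abstract.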

Details

Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....7579f3a61b785ed3484681e8f7e8f7d3
Full Text:
https://doi.org/10.48550/arxiv.2112.06868