Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling

Authors:
Srivastava, Akash
Bansal, Yamini
Ding, Yukun
Hurwitz, Cole Lincoln
Xu, Kai
Egger, Bernhard
Sattigeri, Prasanna
Tenenbaum, Joshua B.
Le, Phuong
R, Arun Prakash
Zhou, Nengfeng
Vaughan, Joel
Wang, Yaquan
Bhattacharyya, Anwesha
Greenewald, Kristjan
Cox, David D.
Gutfreund, Dan
Publication Year:
2020

Abstract

Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors. This approach introduces a trade-off between disentangled representation learning and reconstruction quality, since the model does not have enough capacity to learn correlated latent variables that capture the detailed information present in most image data. To overcome this trade-off, we present a novel multi-stage modeling approach in which the disentangled factors are first learned with a penalty-based disentangled representation learning method; the low-quality reconstruction is then improved with another deep generative model trained to model the missing correlated latent variables, adding detailed information while remaining conditioned on the previously learned disentangled factors. Taken together, our multi-stage modeling approach yields a single, coherent probabilistic model that is theoretically justified by the principle of d-separation and can be realized with a variety of model classes, including likelihood-based models such as variational autoencoders, implicit models such as generative adversarial networks, and tractable models such as normalizing flows or mixtures of Gaussians. We demonstrate that our multi-stage model achieves higher reconstruction quality than current state-of-the-art methods with equivalent disentanglement performance across multiple standard benchmarks. In addition, we apply the multi-stage model to generate synthetic tabular datasets, showing improved performance over benchmark models across a variety of metrics. The interpretability analysis further indicates that the multi-stage model can effectively uncover distinct and meaningful features of variation from which the original distribution can be recovered.
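To make the two-stage idea in the abstract concrete, the following is a minimal sketch (not the authors' code) in PyTorch: a beta-VAE-style first stage learns disentangled factors z with a heavy KL penalty, and a second, frozen-conditioned generative model learns extra "detail" latents w given z to improve the reconstruction. All module names, architectures, and hyperparameters (Stage1VAE, Stage2VAE, beta, latent dimensions) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Stage1VAE(nn.Module):
    """Penalty-based (beta-VAE-like) stage that learns disentangled factors z."""
    def __init__(self, x_dim=784, z_dim=10, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.dec(z), mu, logvar, z

class Stage2VAE(nn.Module):
    """Second stage: learns correlated 'detail' latents w conditioned on the frozen z."""
    def __init__(self, x_dim=784, z_dim=10, w_dim=32, h=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + z_dim, h), nn.ReLU(), nn.Linear(h, 2 * w_dim))
        self.dec = nn.Sequential(nn.Linear(w_dim + z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

    def forward(self, x, z):
        mu, logvar = self.enc(torch.cat([x, z], dim=-1)).chunk(2, dim=-1)
        w = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([w, z], dim=-1)), mu, logvar

def kl(mu, logvar):
    # KL divergence between N(mu, sigma^2) and the standard normal prior
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1).mean()

x = torch.rand(64, 784)  # placeholder batch; replace with a real data loader

# Stage 1: train with a KL penalty weight beta > 1 to encourage disentangled z.
stage1, beta = Stage1VAE(), 4.0
opt1 = torch.optim.Adam(stage1.parameters(), lr=1e-3)
recon, mu, logvar, _ = stage1(x)
loss1 = F.mse_loss(recon, x) + beta * kl(mu, logvar)
opt1.zero_grad(); loss1.backward(); opt1.step()

# Stage 2: freeze stage 1 and train a second generative model that adds detail
# while conditioning on the previously learned disentangled factors.
stage2 = Stage2VAE()
opt2 = torch.optim.Adam(stage2.parameters(), lr=1e-3)
with torch.no_grad():
    _, z_mu, _, _ = stage1(x)          # disentangled factors from the frozen first stage
recon2, mu2, logvar2 = stage2(x, z_mu)
loss2 = F.mse_loss(recon2, x) + kl(mu2, logvar2)
opt2.zero_grad(); loss2.backward(); opt2.step()
```

In this sketch the second stage is another VAE, but, as the abstract notes, it could equally be an implicit model (a GAN) or a tractable model (a normalizing flow or Gaussian mixture), since it only needs to model the missing correlated variation conditioned on z.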

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2010.13187
Document Type:
Working Paper