
The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement

Authors :
Peebles, William
Peebles, John
Zhu, Jun-Yan
Efros, Alexei
Torralba, Antonio
Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Source :
arXiv
Publication Year :
2020
Publisher :
Springer International Publishing

Abstract

Part of the Lecture Notes in Computer Science book series (LNCS, volume 12351)

Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson’s estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN’s latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces. We encourage readers to view videos of our disentanglement results at www.wpeebles.com/hessian-penalty, and code at https://github.com/wpeebles/hessian_penalty.
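To illustrate the idea the abstract describes, here is a minimal NumPy sketch of a Hutchinson-style stochastic estimate of the penalty: for Rademacher directions v, the second-order central finite difference of G along v approximates the directional curvature v^T H v, and its variance across directions vanishes exactly when the Hessian is diagonal. This is a hedged illustration, not the authors' implementation (their official code is at the GitHub link above); the function name, reduction over output components, and default hyperparameters are assumptions.

```python
import numpy as np

def hessian_penalty(G, z, num_rademacher=8, eps=0.1, rng=None):
    """Stochastic estimate of the Hessian Penalty for a function G at input z.

    For Rademacher vectors v, the central finite difference
    (G(z + eps*v) - 2*G(z) + G(z - eps*v)) / eps**2 approximates v^T H v.
    Its variance over v is zero iff the Hessian H of G at z is diagonal
    (off-diagonal terms contribute sign-flipping cross terms v_i * v_j).
    """
    rng = np.random.default_rng(rng)
    g0 = G(z)
    directional = []
    for _ in range(num_rademacher):
        # Rademacher direction: each entry is +1 or -1 with equal probability
        v = rng.choice([-1.0, 1.0], size=z.shape)
        # Second directional derivative of G along v, via finite differences
        d2 = (G(z + eps * v) - 2.0 * g0 + G(z - eps * v)) / eps**2
        directional.append(d2)
    # Variance across directions; reduce over output components with max
    # (one possible reduction choice for vector-valued G)
    return float(np.max(np.var(np.stack(directional), axis=0)))
```

For example, a separable function like `G(z) = sum(z**2)` has a diagonal Hessian and yields a penalty near zero, while a mixing function like `G(z) = z[0] * z[1]` has an off-diagonal Hessian and yields a positive penalty.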

Details

Language :
English
Database :
OpenAIRE
Journal :
arXiv
Accession number :
edsair.od........88..b8514e9be64d40943f37eae3f60f7657