How Diffusion Models Learn to Factorize and Compose
- Source:
- Advances in Neural Information Processing Systems 2024
- Publication Year:
- 2024
Abstract
- Diffusion models are capable of generating photo-realistic images that combine elements that likely do not appear together in the training set, demonstrating the ability to *compositionally generalize*. Nonetheless, the precise mechanism of compositionality and how it is acquired through training remain elusive. Inspired by cognitive neuroscientific approaches, we consider a highly reduced setting to examine whether and when diffusion models learn semantically meaningful and factorized representations of composable features. We performed extensive controlled experiments on conditional Denoising Diffusion Probabilistic Models (DDPMs) trained to generate various forms of 2D Gaussian bump images. We found that the models learn factorized but not fully continuous manifold representations for encoding the continuous features of variation underlying the data. With such representations, models demonstrate superior feature compositionality but limited ability to interpolate over unseen values of a given feature. Our experimental results further demonstrate that diffusion models can attain compositionality with few compositional examples, suggesting a more efficient way to train DDPMs. Finally, we connect manifold formation in diffusion models to percolation theory in physics, offering insight into the sudden onset of factorized representation learning. Our thorough toy experiments thus contribute to a deeper understanding of how diffusion models capture compositional structure in data.
- Comment: 11 pages, 6 figures, plus appendix; some content overlap with arXiv:2402.03305
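The abstract's controlled setting conditions DDPMs on images of 2D Gaussian bumps, with the bump's continuous parameters (e.g., its center coordinates) serving as the composable features. A minimal sketch of such a data generator is shown below; the grid size, center, and width are illustrative assumptions, not the paper's actual experimental parameters:

```python
import numpy as np

def gaussian_bump(size=32, cx=16.0, cy=16.0, sigma=2.0):
    """Render a 2D Gaussian bump centered at (cx, cy) on a size x size grid.

    All parameter values here are hypothetical defaults for illustration.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

# One training image: a single bump; varying (cx, cy) independently
# yields the continuous, composable features of variation.
img = gaussian_bump()
```

Conditioning a DDPM on `(cx, cy)` and holding out certain combinations is one way to probe whether the model composes features it never saw together, as the abstract describes.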
Details
- Database:
- arXiv
- Journal:
- Advances in Neural Information Processing Systems 2024
- Publication Type:
- Report
- Accession number:
- edsarx.2408.13256
- Document Type:
- Working Paper