
Jamming transition as a paradigm to understand the loss landscape of deep neural networks

Authors :
Marco Baity-Jesi
Stéphane d'Ascoli
Mario Geiger
Stefano Spigler
Giulio Biroli
Matthieu Wyart
Levent Sagun
Affiliations :
Ecole Polytechnique Fédérale de Lausanne (EPFL)
Laboratoire de Physique Statistique de l'ENS (LPS), Fédération de recherche du Département de physique de l'ENS (FRDPENS) - ENS Paris (PSL), Sorbonne Université (SU), Université Paris Diderot (UPD7), CNRS
Institut de Physique Théorique (IPHT, UMR CNRS 3681) - CEA, Université Paris-Saclay, CNRS
Systèmes Désordonnés et Applications, Laboratoire de physique de l'ENS (LPENS, UMR 8023) - ENS Paris (PSL), Sorbonne Université (SU), Université Paris Diderot (UPD7), CNRS
Source :
Physical Review E, American Physical Society (APS), 2019, 100 (1), pp.012115. ⟨10.1103/PhysRevE.100.012115⟩
Publication Year :
2019
Publisher :
HAL CCSD, 2019.

Abstract

Deep learning has been immensely successful at a variety of tasks, ranging from classification to artificial intelligence. Learning corresponds to fitting training data, which is implemented by descending a very high-dimensional loss function. Understanding under which conditions neural networks do not get stuck in poor minima of the loss, and how the landscape of that loss evolves as the depth is increased, remains a challenge. Here we predict, and test empirically, an analogy between this landscape and the energy landscape of repulsive ellipses. We argue that in fully connected deep networks a phase transition delimits the over- and underparametrized regimes in which fitting can or cannot be achieved. In the vicinity of this transition, properties of the curvature of the minima of the loss (the spectrum of the Hessian) are critical. This transition shares direct similarities with the jamming transition, by which particles form a disordered solid as the density is increased, and which also occurs in certain classes of computational optimization and learning problems such as the perceptron. Our analysis gives a simple explanation as to why poor minima of the loss cannot be encountered in the overparametrized regime. Interestingly, we observe that the ability of fully connected networks to fit random data is independent of their depth, an independence that appears to also hold for real data. We also study a quantity $\Delta$ that characterizes how well ($\Delta < 0$) or badly ($\Delta > 0$) a datum is learned. At the critical point it is power-law distributed over several decades, $P_+(\Delta) \sim \Delta^{\theta}$ for $\Delta > 0$ and $P_-(\Delta) \sim (-\Delta)^{-\gamma}$ for $\Delta < 0$, with exponents that depend on the choice of activation function. This observation suggests that near the transition the loss landscape has a hierarchical structure and that the learning dynamics is prone to avalanche-like dynamics, with abrupt changes in the set of patterns that are learned.
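For concreteness, the gap variable above can be realized with a hinge-type loss: writing $\Delta_\mu = \epsilon - y_\mu f(x_\mu)$ for a datum $x_\mu$ with label $y_\mu = \pm 1$ and network output $f$, only data with $\Delta_\mu > 0$ contribute to the loss. The following sketch is a minimal NumPy illustration rather than the authors' code; the toy network, the margin `eps`, and helper names such as `mlp_forward` and `init_params` are assumptions made here for the example.

```python
# Minimal sketch, not the authors' code: per-datum gaps Delta under a
# quadratic hinge loss, for a toy fully connected ReLU network on random
# data.  The margin eps, the layer sizes, and the helper names are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def init_params(sizes):
    # He-style initialization for each (weights, biases) pair.
    return [(rng.normal(0.0, np.sqrt(2.0 / m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    # Fully connected network with ReLU hidden layers and a scalar output.
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, h @ W + b)
    W, b = params[-1]
    return (h @ W + b).ravel()

# Random data: P points in d dimensions with random +/-1 labels.
P, d = 1000, 30
X = rng.normal(size=(P, d))
y = rng.choice([-1.0, 1.0], size=P)

params = init_params([d, 60, 60, 1])
eps = 1.0  # hinge margin

# Delta_mu > 0: datum mu is unsatisfied and contributes to the loss;
# Delta_mu < 0: datum mu is fitted with margin |Delta_mu|.
delta = eps - y * mlp_forward(params, X)
loss = 0.5 * np.mean(np.where(delta > 0.0, delta, 0.0) ** 2)
print(f"loss = {loss:.4f}, unsatisfied fraction = {np.mean(delta > 0.0):.2f}")

# Log-binned histogram of the positive gaps: at the jamming transition,
# the small-Delta slope of P_+(Delta) would give the exponent theta.
pos = np.sort(delta[delta > 0.0])
bins = np.logspace(np.log10(pos[0]), np.log10(pos[-1]), 20)
density, _ = np.histogram(pos, bins=bins, density=True)
```

An untrained toy network like this one is of course far from the critical point; observing the power laws $P_+(\Delta) \sim \Delta^{\theta}$ and $P_-(\Delta) \sim (-\Delta)^{-\gamma}$ would require training the network to the jamming transition and then fitting the log-binned histograms at small $|\Delta|$.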

Details

Language :
English
ISSN :
2470-0045 and 2470-0053
Database :
OpenAIRE
Journal :
Physical Review E, American Physical Society (APS), 2019, 100 (1), pp.012115. ⟨10.1103/PhysRevE.100.012115⟩
Accession number :
edsair.doi.dedup.....9db6a7f683862dabd9116b5f8a9b9c67