1. A Mean Field View of the Landscape of Two-Layers Neural Networks
- Author
Song Mei, Phan-Minh Nguyen, and Andrea Montanari
- Subjects
Computer Science - Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Mathematics - Statistics Theory (math.ST); Condensed Matter - Statistical Mechanics (cond-mat.stat-mech); mathematical optimization; neural networks; stochastic gradient descent; gradient flow; partial differential equations; Wasserstein space; generalization; local optima; maxima and minima; convergence; PNAS Plus; Physical Sciences; Multidisciplinary
- Abstract
Multi-layer neural networks are among the most powerful models in machine learning, yet the fundamental reasons for this success defy mathematical understanding. Learning a neural network requires optimizing a non-convex, high-dimensional objective (the risk function), a problem that is usually attacked using stochastic gradient descent (SGD). Does SGD converge to a global optimum of the risk, or only to a local optimum? In the first case, does this happen because local minima are absent, or because SGD somehow avoids them? In the second case, why do the local minima reached by SGD have good generalization properties? In this paper we consider a simple case, namely two-layer neural networks, and prove that, in a suitable scaling limit, the SGD dynamics is captured by a certain non-linear partial differential equation (PDE) that we call distributional dynamics (DD). We then consider several specific examples and show how DD can be used to prove convergence of SGD to networks with nearly ideal generalization error. This description allows us to 'average out' some of the complexities of the landscape of neural networks and can be used to prove a general convergence result for noisy SGD.
Comment: 103 pages
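For orientation, the sketch below gives the generic shape of the objects the abstract refers to; the notation (the activation σ_*, the potentials V, U, Ψ, and the parameter distribution ρ_t) is assumed here for illustration rather than quoted from this record, and the exact time scaling and noise terms depend on the SGD variant.

```latex
% Hedged sketch (notation assumed, not quoted from this record):
% a two-layer network with N hidden units and its population risk,
\[
  \hat{y}(x;\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N} \sigma_*(x;\theta_i),
  \qquad
  R_N(\theta) \;=\; \mathbb{E}\Big[\big(y-\hat{y}(x;\theta)\big)^2\Big],
\]
% and the generic form of a distributional dynamics for the empirical
% distribution of the unit parameters, \rho_t = N^{-1}\sum_i \delta_{\theta_i},
% in the large-N, small-step-size limit (a Wasserstein gradient-flow type PDE):
\[
  \partial_t \rho_t \;=\; \nabla_\theta\cdot\big(\rho_t\,\nabla_\theta \Psi(\theta;\rho_t)\big),
  \qquad
  \Psi(\theta;\rho) \;=\; V(\theta) \,+\, \int U(\theta,\bar\theta)\,\rho(\mathrm{d}\bar\theta).
\]
```

Here V and U stand for one- and two-unit potentials obtained by averaging the risk over the data distribution; this is a schematic rendering of a mean-field limit of this kind, not a statement of the paper's exact equations.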
- Published
- 2018