Non-convex learning via Stochastic Gradient Langevin Dynamics: a nonasymptotic analysis
- Publication Year :
- 2017
Abstract
- Stochastic Gradient Langevin Dynamics (SGLD) is a popular variant of Stochastic Gradient Descent, where properly scaled isotropic Gaussian noise is added to an unbiased estimate of the gradient at each iteration. This modest change allows SGLD to escape local minima and suffices to guarantee asymptotic convergence to global minimizers for sufficiently regular non-convex objectives (Gelfand and Mitter, 1991). The present work provides a nonasymptotic analysis in the context of non-convex learning problems, giving finite-time guarantees for SGLD to find approximate minimizers of both empirical and population risks. As in the asymptotic setting, our analysis relates the discrete-time SGLD Markov chain to a continuous-time diffusion process. A new tool that drives the results is the use of weighted transportation cost inequalities to quantify the rate of convergence of SGLD to a stationary distribution in the Euclidean $2$-Wasserstein distance.
- Comment :
- 29 pages
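The update the abstract describes is plain stochastic gradient descent plus isotropic Gaussian noise whose scale is tied to the step size and an inverse temperature. Below is a minimal sketch in Python; the step size `eta`, inverse temperature `beta`, and the double-well objective are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of the SGLD iteration described in the abstract.
# The objective, noise scale, and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def grad_estimate(x):
    # Unbiased estimate of the gradient of f(x) = (x^2 - 1)^2, a simple
    # non-convex objective with global minimizers at x = +/- 1; the added
    # Gaussian perturbation stands in for minibatch noise.
    true_grad = 4.0 * x * (x**2 - 1.0)
    return true_grad + rng.normal(scale=0.1, size=x.shape)

def sgld(x0, eta=1e-2, beta=10.0, n_steps=5000):
    # x_{k+1} = x_k - eta * g(x_k) + sqrt(2 * eta / beta) * xi_k,
    # with xi_k standard Gaussian: SGD plus properly scaled isotropic noise.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        xi = rng.normal(size=x.shape)
        x = x - eta * grad_estimate(x) + np.sqrt(2.0 * eta / beta) * xi
    return x

# Starting near the local maximum at 0, the injected noise lets the
# iterate escape toward one of the global minimizers at +/- 1.
print(sgld(np.array([0.05])))
```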
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.1702.03849
- Document Type :
- Working Paper