1. Distributed learning in non-convex environments - Part I: agreement at a linear rate
- Author
- Ali H. Sayed and Stefan Vlaski
- Subjects
Mathematical optimization, Technology, Optimization problem, Computer science, Stochastic optimization, adaptation, non-convex cost, Engineering, Saddle point, stationary points, Electrical and Electronic Engineering, Science & Technology, Stochastic process, Engineering, Electrical & Electronic, gradient noise, Signal Processing, diffusion learning, Networking & Telecommunications, distributed optimization, math.OC, eess.SP, cs.MA
- Abstract
Driven by the need to solve increasingly complex optimization problems in signal processing and machine learning, there has been increasing interest in understanding the behavior of gradient-descent algorithms in non-convex environments. Most available works on distributed non-convex optimization focus on the deterministic setting, where exact gradients are available at each agent. In this work and its Part II, we consider stochastic cost functions, where exact gradients are replaced by stochastic approximations and the resulting gradient noise persistently seeps into the dynamics of the algorithm. We establish that the diffusion learning strategy continues to yield meaningful estimates in non-convex scenarios, in the sense that the iterates at the individual agents cluster in a small region around the network centroid. We use this insight to motivate a short-term model for network evolution over a finite horizon. In Part II of this work, we leverage this model to establish descent of the diffusion strategy through saddle points in $O(1/\mu)$ steps, where $\mu$ denotes the step-size, and the return of approximately second-order stationary points in a polynomial number of iterations.
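For context, a minimal sketch of the adapt-then-combine (ATC) form of the diffusion strategy that the abstract refers to, written in generic notation; the symbols $w_{k,i}$, $\psi_{k,i}$, $\widehat{\nabla J}_k$, $a_{\ell k}$, and $\mathcal{N}_k$ are assumed here for illustration and may differ from the paper's own notation:

$$
\psi_{k,i} = w_{k,i-1} - \mu\,\widehat{\nabla J}_k(w_{k,i-1}), \qquad
w_{k,i} = \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\,\psi_{\ell,i}
$$

Each agent $k$ first performs a local stochastic-gradient (adapt) step using an approximate gradient $\widehat{\nabla J}_k$, then combines the intermediate iterates of its neighbors $\ell \in \mathcal{N}_k$ through nonnegative combination weights $a_{\ell k}$ that sum to one. Gradient noise enters through the stochastic approximation, and the step-size $\mu$ governs both the adaptation speed and, per the abstract, the size of the region around the network centroid in which the agents' iterates cluster.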
- Published
- 2021