Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Source :
- SIAM Journal on Optimization. 2017, Vol. 27, Issue 2, p. 927-956. 30 pages.
- Publication Year :
- 2017
Abstract
- In this paper we study stochastic quasi-Newton methods for nonconvex stochastic optimization, where we assume that noisy information about the gradients of the objective function is available via a stochastic first-order oracle (SFO). We propose a general framework for such methods, for which we prove almost sure convergence to stationary points and analyze its worst-case iteration complexity. When a randomly chosen iterate is returned as the output of such an algorithm, we prove that in the worst case the SFO-calls complexity is O(ε^{-2}) to ensure that the expectation of the squared norm of the gradient is smaller than the given accuracy tolerance ε. We also propose a specific algorithm, namely a stochastic damped limited-memory BFGS (SdLBFGS) method, that falls under the proposed framework. Moreover, we incorporate the stochastic variance reduced gradient (SVRG) variance-reduction technique into the proposed SdLBFGS method and analyze its SFO-calls complexity. Numerical results on a nonconvex binary classification problem using a support vector machine and a multiclass classification problem using neural networks are reported.
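The SdLBFGS method described in the abstract combines mini-batch (SFO) gradient estimates with a damped limited-memory BFGS update so that the implicit Hessian approximation stays positive definite despite gradient noise. The sketch below is an informal illustration of that idea, not the authors' exact algorithm: the names `sdlbfgs_sketch`, `two_loop_direction`, and `stoch_grad`, the fixed step size, and the simplified Powell-style damping against the identity-scaled secant (rather than the paper's full B_k s term) are all assumptions made for the example.

```python
import numpy as np

def two_loop_direction(grad, s_list, y_list):
    """Standard L-BFGS two-loop recursion: returns -H*grad built from the
    stored curvature pairs (s_i, y_i); reduces to -grad when memory is empty."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if s_list:
        s, y = s_list[-1], y_list[-1]
        q *= (s @ y) / (y @ y)          # initial scaling gamma_k * I
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return -q

def sdlbfgs_sketch(stoch_grad, x0, step=0.05, memory=10, iters=200, rng=None):
    """Illustrative stochastic damped L-BFGS loop (a sketch, not SdLBFGS as
    specified in the paper). stoch_grad(x, rng) returns a mini-batch gradient."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    s_list, y_list = [], []
    g_prev = stoch_grad(x, rng)
    for _ in range(iters):
        d = two_loop_direction(g_prev, s_list, y_list)
        x_new = x + step * d
        g_new = stoch_grad(x_new, rng)
        s, y = x_new - x, g_new - g_prev
        sy, ss = s @ y, s @ s
        if ss > 1e-12:
            # Simplified Powell-style damping: blend y toward s so that
            # s^T y_bar >= 0.25 * s^T s, keeping the curvature pair usable
            # (and the implicit Hessian approximation positive definite).
            if sy < 0.25 * ss:
                theta = 0.75 * ss / (ss - sy)
                y = theta * y + (1.0 - theta) * s
            s_list.append(s); y_list.append(y)
            if len(s_list) > memory:
                s_list.pop(0); y_list.pop(0)
        x, g_prev = x_new, g_new
    return x
```

For instance, `stoch_grad` could return the gradient of a mini-batch SVM or neural-network loss, matching the experiments mentioned in the abstract; the memory size, step size, and the 0.25 damping threshold are common L-BFGS-style defaults chosen for the illustration rather than the paper's specific parameter choices.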
Details
- Language :
- English
- ISSN :
- 1052-6234
- Volume :
- 27
- Issue :
- 2
- Database :
- Academic Search Index
- Journal :
- SIAM Journal on Optimization
- Publication Type :
- Academic Journal
- Accession number :
- 123857591
- Full Text :
- https://doi.org/10.1137/15M1053141