
On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization

Authors:
Chen, Xiangyi
Liu, Sijia
Sun, Ruoyu
Hong, Mingyi
Publication Year:
2018

Abstract

This paper studies a class of adaptive gradient-based momentum algorithms that update the search directions and learning rates simultaneously using past gradients. This class, which we refer to as the "Adam-type", includes popular algorithms such as Adam, AMSGrad, and AdaGrad. Despite their popularity in training deep neural networks, the convergence of these algorithms for solving nonconvex problems remains an open question. This paper provides a set of mild sufficient conditions that guarantee convergence for the Adam-type methods. We prove that under our derived conditions, these methods can achieve a convergence rate of order $O(\log{T}/\sqrt{T})$ for nonconvex stochastic optimization. We show that the conditions are essential in the sense that violating them may cause the algorithms to diverge. Moreover, we propose and analyze a class of (deterministic) incremental adaptive gradient algorithms, which have the same $O(\log{T}/\sqrt{T})$ convergence rate. Our study can also be extended to a broader class of adaptive gradient methods in machine learning and optimization.
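The "Adam-type" template described in the abstract can be summarized by the update $x_{t+1} = x_t - \alpha_t\, m_t / \sqrt{\hat{v}_t}$, where $m_t$ accumulates past gradients (momentum) and $\hat{v}_t$ sets per-coordinate effective learning rates; Adam, AMSGrad, and AdaGrad differ only in how $\hat{v}_t$ is formed. The following is a minimal Python sketch of that template, not code from the paper; the hyperparameter names (alpha, beta1, beta2, eps) and the toy objective are illustrative assumptions, and bias correction is omitted.

# Minimal sketch (not the paper's code) of the generic Adam-type update
#   x_{t+1} = x_t - alpha * m_t / sqrt(vhat_t)
# All hyperparameter names and defaults below are illustrative assumptions.
import numpy as np

def adam_type_step(x, grad, state, variant="amsgrad",
                   alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One update of a generic Adam-type method; `state` carries (m, v, vhat)."""
    m, v, vhat = state
    m = beta1 * m + (1.0 - beta1) * grad            # momentum built from past gradients
    if variant == "adagrad":
        v = v + grad ** 2                           # AdaGrad: running sum of squared gradients
    else:
        v = beta2 * v + (1.0 - beta2) * grad ** 2   # Adam / AMSGrad: moving average of squares
    if variant == "amsgrad":
        vhat = np.maximum(vhat, v)                  # AMSGrad: keep the denominator non-decreasing
    else:
        vhat = v
    x_new = x - alpha * m / (np.sqrt(vhat) + eps)   # effective step size alpha / sqrt(vhat)
    return x_new, (m, v, vhat)

# Toy usage on a nonconvex objective f(x) = sum(x^2 * sin(x)) with noisy gradients.
rng = np.random.default_rng(0)
x = rng.standard_normal(5)
state = (np.zeros_like(x), np.zeros_like(x), np.zeros_like(x))
for t in range(1000):
    grad = 2 * x * np.sin(x) + x ** 2 * np.cos(x) + 0.01 * rng.standard_normal(5)
    x, state = adam_type_step(x, grad, state, variant="amsgrad")

The np.maximum step is what distinguishes AMSGrad from Adam: it keeps the denominator non-decreasing, so the per-coordinate effective step sizes never grow; with beta1 = 0 and the "adagrad" branch, the sketch reduces to plain AdaGrad.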

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.1808.02941
Document Type:
Working Paper