
A Theoretical and Empirical Study on the Convergence of Adam with an 'Exact' Constant Step Size in Non-Convex Settings

Authors :
Mazumder, Alokendu
Sabharwal, Rishabh
Tayal, Manan
Kumar, Bhartendu
Rathore, Punit
Publication Year :
2023

Abstract

In neural network training, RMSProp and Adam remain widely favoured optimisation algorithms. One of the keys to their performance lies in selecting the correct step size, which can significantly influence their effectiveness. Additionally, questions about their theoretical convergence properties continue to be a subject of interest. In this paper, we theoretically analyse a constant step size version of Adam in the non-convex setting and discuss why a fixed step size is important for the convergence of Adam. This work demonstrates the derivation and effective implementation of a constant step size for Adam, offering insights into its performance and efficiency in non-convex optimisation scenarios. (i) First, we prove that these adaptive gradient algorithms are guaranteed to reach criticality for smooth non-convex objectives with a constant step size, and we give bounds on the running time. Both deterministic and stochastic versions of Adam are analysed in this paper. We show sufficient conditions for the derived constant step size to achieve asymptotic convergence of the gradients to zero under minimal assumptions. Next, (ii) we design experiments to empirically study Adam's convergence with our proposed constant step size against state-of-the-art step size schedulers on classification tasks. Lastly, (iii) we demonstrate that our derived constant step size is more effective at reducing gradient norms, and we show empirically that, despite the accumulation of a few past gradients, the key driver of convergence in Adam is the non-increasing step size.

Comment: 48 pages including proofs and extended experiments
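For context, the sketch below shows standard Adam run with a fixed (constant) step size, which is the setting the paper analyses. It is a minimal illustration only: the function name adam_constant_step, the grad_fn interface, and the placeholder value alpha=1e-3 are assumptions for this example; the specific constant step size derived in the paper is not reproduced here.

```python
import numpy as np

def adam_constant_step(grad_fn, w0, alpha=1e-3, beta1=0.9, beta2=0.999,
                       eps=1e-8, num_iters=1000):
    """Vanilla Adam with a constant step size alpha.

    grad_fn: callable returning the (possibly stochastic) gradient at w.
    alpha:   fixed step size; 1e-3 is a placeholder, not the paper's
             derived constant.
    """
    w = np.asarray(w0, dtype=float)
    m = np.zeros_like(w)   # first-moment estimate (EMA of gradients)
    v = np.zeros_like(w)   # second-moment estimate (EMA of squared gradients)
    for t in range(1, num_iters + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)  # constant-alpha update
    return w
```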

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.08339
Document Type :
Working Paper