
A Methodology Establishing Linear Convergence of Adaptive Gradient Methods under PL Inequality

Authors:
Chakrabarti, Kushal
Baranwal, Mayank
Publication Year:
2024

Abstract

Adaptive gradient-descent optimizers are the standard choice for training neural network models. Despite their faster convergence than gradient-descent and remarkable performance in practice, adaptive optimizers are not as well understood as vanilla gradient-descent. One reason is that the dynamic update of the learning rate, which helps these methods converge faster, also makes their analysis intricate. In particular, the simple gradient-descent method converges at a linear rate for a class of optimization problems, whereas the practically faster adaptive gradient methods lack such a theoretical guarantee. The Polyak-Łojasiewicz (PL) inequality defines the weakest known class of functions for which linear convergence of gradient-descent and its momentum variants has been proved. Therefore, in this paper, we prove that AdaGrad and Adam, two well-known adaptive gradient methods, converge linearly when the cost function is smooth and satisfies the PL inequality. Our theoretical framework follows a simple and unified approach, applicable to both batch and stochastic gradients, and can potentially be utilized in analyzing the linear convergence of other variants of Adam.

Comment: Accepted for publication at the main track of the 27th European Conference on Artificial Intelligence (ECAI-2024)
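For context (a standard statement added here for reference, not part of the record's abstract): a differentiable, $L$-smooth function $f$ with minimum value $f^*$ satisfies the PL inequality with parameter $\mu > 0$ if, for all $x$,

\[ \tfrac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^*\bigr). \]

Under this condition, vanilla gradient-descent with step size $1/L$ satisfies $f(x_{k+1}) - f^* \le (1 - \mu/L)\,\bigl(f(x_k) - f^*\bigr)$, i.e., it converges linearly; the paper's contribution is establishing analogous linear rates for the adaptive methods AdaGrad and Adam.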

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2407.12629
Document Type:
Working Paper