
Understanding Optimization in Deep Learning with Central Flows

Authors:
Cohen, Jeremy M.
Damian, Alex
Talwalkar, Ameet
Kolter, Zico
Lee, Jason D.

Publication Year:
2024

Abstract

Optimization in deep learning remains poorly understood, even in the simple setting of deterministic (i.e. full-batch) training. A key difficulty is that much of an optimizer's behavior is implicitly determined by complex oscillatory dynamics, referred to as the "edge of stability." The main contribution of this paper is to show that an optimizer's implicit behavior can be explicitly captured by a "central flow": a differential equation which models the time-averaged optimization trajectory. We show that these flows can empirically predict long-term optimization trajectories of generic neural networks with a high degree of numerical accuracy. By interpreting these flows, we reveal for the first time 1) the precise sense in which RMSProp adapts to the local loss landscape, and 2) an "acceleration via regularization" mechanism, wherein adaptive optimizers implicitly navigate towards low-curvature regions in which they can take larger steps. This mechanism is key to the efficacy of these adaptive optimizers. Overall, we believe that central flows constitute a promising tool for reasoning about optimization in deep learning.

Comment: first two authors contributed equally; author order determined by coin flip
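To make the time-averaging idea concrete, here is a minimal sketch (not taken from the paper) of oscillatory gradient descent on a hypothetical ill-conditioned quadratic toy loss. With a step size just below the stability threshold of the sharp direction, the iterates oscillate across the valley while drifting smoothly along it; a running time-average recovers a smooth path, which is the kind of trajectory a central flow is designed to model. Note this sketch does not implement the paper's central flow itself, which includes an additional curvature-dependent correction and applies to generic networks.

```python
import numpy as np

# Hypothetical toy loss (assumption, not from the paper):
#   L(x, y) = (a/2) x^2 + (b/2) y^2, with a >> b.
a, b = 100.0, 1.0
eta = 0.0199  # just below the stability threshold 2/a = 0.02

# Run full-batch gradient descent: the x-coordinate oscillates
# (multiplier 1 - eta*a = -0.99), the y-coordinate decays smoothly.
w = np.array([1.0, 1.0])
iterates = [w.copy()]
for _ in range(400):
    grad = np.array([a * w[0], b * w[1]])
    w = w - eta * grad
    iterates.append(w.copy())
iterates = np.array(iterates)

# Smooth the oscillations with a running time-average; this averaged
# trajectory is the object a central flow models with an ODE.
window = 20
kernel = np.ones(window) / window
avg_x = np.convolve(iterates[:, 0], kernel, mode="valid")

print("raw x iterates (oscillating):", iterates[195:200, 0])
print("time-averaged x (smooth):    ", avg_x[195:200])
print("y coordinate (smooth decay): ", iterates[195:200, 1])
```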

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2410.24206
Document Type:
Working Paper