
Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks

Authors:
Hideaki Iiduka
Source:
IEEE Transactions on Cybernetics, 52:13250-13261
Publication Year:
2022
Publisher:
Institute of Electrical and Electronics Engineers (IEEE), 2022.

Abstract

This paper deals with nonconvex stochastic optimization problems in deep learning and provides appropriate learning rates with which adaptive learning rate optimization algorithms, such as Adam and AMSGrad, can approximate a stationary point of such a problem. Both constant and diminishing learning rates are provided for this purpose. The results also guarantee that these algorithms can approximate global minimizers of convex stochastic optimization problems. The algorithms are examined in numerical experiments on text and image classification, which show that the algorithms with constant learning rates perform better than those with diminishing learning rates.
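To make the comparison between the two learning-rate regimes concrete, below is a minimal PyTorch sketch, not taken from the paper, that runs Adam once with a constant learning rate and once with a diminishing schedule of the form alpha_n = alpha_0 / sqrt(n + 1). The toy model, data, step count, and the particular diminishing schedule are illustrative assumptions, not the paper's experimental setup.

import math
import torch
from torch import nn

def train(lr_schedule, steps=200):
    # Toy least-squares problem; purely illustrative.
    torch.manual_seed(0)
    model = nn.Linear(10, 1)
    x, y = torch.randn(256, 10), torch.randn(256, 1)
    loss_fn = nn.MSELoss()
    # Adam optimizer; passing amsgrad=True here would switch
    # the update rule to AMSGrad.
    opt = torch.optim.Adam(model.parameters(), lr=lr_schedule(0))
    for n in range(steps):
        for group in opt.param_groups:
            group["lr"] = lr_schedule(n)  # set the step-n learning rate alpha_n
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Constant learning rate: alpha_n = 1e-3 for every step n.
loss_constant = train(lambda n: 1e-3)

# Diminishing learning rate: alpha_n = 1e-3 / sqrt(n + 1); this schedule
# is a common illustrative choice, not necessarily the one analyzed
# in the paper.
loss_diminishing = train(lambda n: 1e-3 / math.sqrt(n + 1))

print(loss_constant, loss_diminishing)

Comparing the two final losses gives a rough, small-scale analogue of the paper's experimental comparison; on realistic text and image classification tasks the abstract reports the constant-rate variants performing better.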

Details

ISSN:
2168-2275 and 2168-2267
Volume:
52
Database:
OpenAIRE
Journal:
IEEE Transactions on Cybernetics
Accession number:
edsair.doi.dedup.....5839538a20fc37c9813f46caeebe5639