Appropriate Learning Rates of Adaptive Learning Rate Optimization Algorithms for Training Deep Neural Networks
- Source :
- IEEE Transactions on Cybernetics. 52:13250-13261
- Publication Year :
- 2022
- Publisher :
- Institute of Electrical and Electronics Engineers (IEEE), 2022.
-
Abstract
- This paper deals with nonconvex stochastic optimization problems in deep learning and provides appropriate learning rates with which adaptive learning rate optimization algorithms, such as Adam and AMSGrad, can approximate a stationary point of the problem. In particular, both constant and diminishing learning rates that achieve this approximation are provided. Our results also guarantee that the adaptive learning rate optimization algorithms can approximate global minimizers of convex stochastic optimization problems. The adaptive learning rate optimization algorithms are examined in numerical experiments on text and image classification. The experiments show that the algorithms with constant learning rates perform better than those with diminishing learning rates.
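The contrast the abstract draws can be illustrated with a minimal Adam loop that accepts a learning-rate schedule. This is only a rough sketch on a toy convex quadratic, not the paper's algorithm or experimental setting; the hyperparameters, the α/√n diminishing schedule, and the test function are all illustrative assumptions.

```python
import numpy as np

def adam_minimize(grad, x0, steps, lr_schedule,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimal Adam loop; lr_schedule maps the 1-based step n to a learning rate.

    Illustrative sketch only; hyperparameters are the common Adam defaults,
    not values taken from the paper.
    """
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (mean of gradients) estimate
    v = np.zeros_like(x)  # second-moment (mean of squared gradients) estimate
    for n in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        m_hat = m / (1 - beta1 ** n)  # bias-corrected moments
        v_hat = v / (1 - beta2 ** n)
        x = x - lr_schedule(n) * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Toy convex problem f(x) = ||x||^2 / 2, gradient = x, minimizer at the origin.
grad = lambda x: x
x0 = [2.0, -3.0]
constant = adam_minimize(grad, x0, 2000, lambda n: 0.01)               # constant rate
diminishing = adam_minimize(grad, x0, 2000, lambda n: 0.01 / np.sqrt(n))  # alpha / sqrt(n)
```

On this toy problem, the diminishing schedule shrinks the per-step movement so quickly that the iterate travels less far within the same step budget, loosely mirroring the experimental observation that constant learning rates performed better.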
- Subjects :
- Computer Science::Machine Learning
Mathematical optimization
Computer science
Convergence
FOS: Mathematics
Electrical and Electronic Engineering
Mathematics - Optimization and Control
Contextual image classification
Optimization algorithm
Deep learning
Training
65K05, 90C25, 90C90, 92B20
Stationary point
Computer Science Applications
Human-Computer Interaction
Optimization and Control (math.OC)
Control and Systems Engineering
Stochastic optimization
Neural Networks, Computer
Artificial intelligence
Constant (mathematics)
Algorithms
Software
Information Systems
Details
- ISSN :
- 2168-2275 and 2168-2267
- Volume :
- 52
- Database :
- OpenAIRE
- Journal :
- IEEE Transactions on Cybernetics
- Accession number :
- edsair.doi.dedup.....5839538a20fc37c9813f46caeebe5639