
MAMGD: Gradient-Based Optimization Method Using Exponential Decay

Authors:
Nikita Sakovich
Dmitry Aksenov
Ekaterina Pleshakova
Sergey Gataullin
Source:
Technologies, Vol 12, Iss 9, p 154 (2024)
Publication Year:
2024
Publisher:
MDPI AG, 2024.

Abstract

Optimization methods, in particular gradient-based optimization methods, are a key part of neural network training. In this paper, we propose a new gradient optimization method that uses exponential decay and an adaptive learning rate based on a discrete second-order derivative of the gradients. The MAMGD optimizer combines an adaptive learning step, exponential smoothing and gradient accumulation, parameter correction, and some discrete analogies from classical mechanics. The experiments included the minimization of multivariate real-valued functions, function approximation with multilayer neural networks, and the training of neural networks on popular classification and regression datasets. The experimental results for the new optimization technique showed a high convergence speed, stability to fluctuations, and accumulation of gradient accumulators. The research methodology is based on a quantitative performance analysis of the algorithm, carried out by conducting computational experiments on various optimization problems and comparing the results with existing methods.
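
To make the abstract's ingredients concrete, the sketch below shows how exponential learning-rate decay, exponential smoothing with gradient accumulation, bias correction, and a discrete second-order derivative of the gradients (a finite difference of successive gradients) can fit together in an Adam-style update. This is a hypothetical illustration only, not the authors' MAMGD algorithm: the class name MAMGDSketch, the decay schedule, and the way the gradient difference enters the second-moment accumulator are assumptions made to clarify the ideas.

# Hypothetical sketch only; not the MAMGD update rule from the paper.
import numpy as np

class MAMGDSketch:
    def __init__(self, lr=0.1, beta1=0.9, beta2=0.999, decay=0.001, eps=1e-8):
        self.lr, self.beta1, self.beta2 = lr, beta1, beta2
        self.decay, self.eps = decay, eps
        self.m = None          # exponentially smoothed gradient (first moment)
        self.v = None          # accumulated squared-gradient term (second moment)
        self.prev_grad = None  # previous gradient, for the discrete 2nd derivative
        self.t = 0

    def step(self, params, grad):
        if self.m is None:
            self.m = np.zeros_like(params)
            self.v = np.zeros_like(params)
            self.prev_grad = np.zeros_like(params)
        self.t += 1
        # Discrete second-order derivative: finite difference of successive gradients.
        grad_diff = grad - self.prev_grad
        self.prev_grad = grad.copy()
        # Exponential smoothing and gradient accumulation (assumed form:
        # the gradient difference is folded into the second-moment accumulator).
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * (grad**2 + grad_diff**2)
        # Parameter (bias) correction, as in Adam-style methods.
        m_hat = self.m / (1 - self.beta1**self.t)
        v_hat = self.v / (1 - self.beta2**self.t)
        # Exponentially decayed learning rate.
        lr_t = self.lr * np.exp(-self.decay * self.t)
        return params - lr_t * m_hat / (np.sqrt(v_hat) + self.eps)

# Usage: minimize the multivariate quadratic f(x, y) = x^2 + 10*y^2.
opt = MAMGDSketch(lr=0.1)
params = np.array([3.0, -2.0])
for _ in range(300):
    grad = np.array([2 * params[0], 20 * params[1]])
    params = opt.step(params, grad)
print(params)  # values end up close to the minimum at [0, 0]

The toy run mirrors the first class of experiments mentioned in the abstract (minimization of multivariate real-valued functions); the specific function, hyperparameters, and iteration count are chosen here only for illustration.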

Details

Language:
English
ISSN:
2227-7080
Volume:
12
Issue:
9
Database:
Directory of Open Access Journals
Journal:
Technologies
Publication Type:
Academic Journal
Accession number:
edsdoj.2a833bee47ef8cfbd1669c0055ca
Document Type:
Article
Full Text:
https://doi.org/10.3390/technologies12090154