
Non-convergence of stochastic gradient descent in the training of deep neural networks.

Authors :
Cheridito, Patrick
Jentzen, Arnulf
Rossmannek, Florian
Source :
Journal of Complexity. June 2021, Vol. 64, Article 101540.
Publication Year :
2021

Abstract

Deep neural networks have been trained successfully with stochastic gradient descent in a wide range of application areas. However, there is no rigorous mathematical explanation of why this works so well. The training of neural networks with stochastic gradient descent involves four discretization parameters: (i) the network architecture; (ii) the amount of training data; (iii) the number of gradient steps; and (iv) the number of randomly initialized gradient trajectories. While it can be shown that the approximation error converges to zero if all four parameters are sent to infinity in the right order, we demonstrate in this paper that stochastic gradient descent fails to converge for ReLU networks if their depth is much larger than their width and the number of random initializations does not increase to infinity fast enough.
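
To make the four discretization parameters concrete, here is a minimal sketch assuming PyTorch. It is not the authors' construction: the target function, network sizes, step counts, and number of trials are illustrative assumptions, and it uses full-batch gradient descent for simplicity. With depth much larger than width, the default random initialization often produces networks whose ReLU units are all inactive, so individual trajectories can stall far from zero loss, which is the flavor of non-convergence the paper studies.

```python
import torch
import torch.nn as nn

def make_relu_net(width: int, depth: int) -> nn.Sequential:
    """Deep, narrow fully connected ReLU network (depth >> width)."""
    layers = [nn.Linear(1, width), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 1))
    return nn.Sequential(*layers)

# Hypothetical regression task (not from the paper).
torch.manual_seed(0)
x = torch.rand(256, 1)          # (ii) amount of training data
y = torch.sin(6.0 * x)          # illustrative target function

best_loss = float("inf")
for trial in range(5):          # (iv) number of random initializations
    net = make_relu_net(width=3, depth=50)   # (i) architecture: depth >> width
    opt = torch.optim.SGD(net.parameters(), lr=1e-2)
    for step in range(1000):    # (iii) number of gradient steps
        opt.zero_grad()
        loss = ((net(x) - y) ** 2).mean()
        loss.backward()
        opt.step()
    best_loss = min(best_loss, loss.item())
    print(f"trial {trial}: final loss {loss.item():.4f}")

# If many initializations collapse to a (nearly) constant function,
# the best loss over a small number of trials stays bounded away from zero.
print(f"best loss over trials: {best_loss:.4f}")
```

Running this with a small number of trials typically shows several trajectories stuck at essentially the same loss; only by increasing the number of random initializations (parameter (iv)) does one reliably find a trajectory that makes progress.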

Details

Language :
English
ISSN :
0885-064X
Volume :
64
Database :
Academic Search Index
Journal :
Journal of Complexity
Publication Type :
Academic Journal
Accession number :
149369980
Full Text :
https://doi.org/10.1016/j.jco.2020.101540