
Quadratic number of nodes is sufficient to learn a dataset via gradient descent

Authors:
Das, Biswarup
Golikov, Eugene A.
Publication Year:
2019

Abstract

We prove that if an activation function satisfies some mild conditions, and the number of neurons in a two-layer fully connected neural network with this activation function exceeds a certain threshold, then gradient descent on the quadratic loss finds input-layer weights attaining a global minimum in linear time. This threshold is an improvement over previously obtained values. We hypothesise that this bound cannot be improved by the method used in this work.

Comment: Machine learning using neural networks, gradient descent, optimization, overparametrization regime
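The setting described in the abstract can be illustrated numerically: a two-layer network whose width grows quadratically in the number of samples, with only the input-layer weights trained by gradient descent on a quadratic loss. The sketch below is illustrative only; the dataset sizes, the ReLU activation, the 1/sqrt(m) output scaling, and the learning rate are assumptions for the demo, not the paper's construction or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: n samples in d dimensions.
n, d = 8, 3
m = n * n  # width quadratic in the number of samples, echoing the paper's regime

X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Two-layer network f(x) = (1/sqrt(m)) * sum_k a_k * relu(w_k . x).
# Output weights a are fixed; only the input-layer weights W are trained,
# matching the abstract's statement about optimizing the input layer.
W = rng.standard_normal((m, d))
a = rng.choice([-1.0, 1.0], size=m)

def loss(W):
    preds = (np.maximum(X @ W.T, 0.0) @ a) / np.sqrt(m)
    return 0.5 * np.mean((preds - y) ** 2)

lr = 0.5
losses = [loss(W)]
for _ in range(200):
    pre = X @ W.T                          # (n, m) preactivations
    act = np.maximum(pre, 0.0)             # ReLU
    resid = (act @ a) / np.sqrt(m) - y     # (n,) residuals
    # Gradient of the quadratic loss with respect to W only.
    grad = ((resid[:, None] * (pre > 0) * a).T @ X) / (np.sqrt(m) * n)
    W -= lr * grad
    losses.append(loss(W))

print(f"initial loss {losses[0]:.4f}, final loss {losses[-1]:.6f}")
```

In this overparametrized regime the loss decreases steadily toward zero; the paper's contribution is a proof that a quadratic width threshold suffices for such convergence, which this toy run does not establish.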

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.1911.05402
Document Type:
Working Paper