Lower Bounds on the Generalization Error of Nonlinear Learning Models.

Authors :
Seroussi, Inbar
Zeitouni, Ofer
Source :
IEEE Transactions on Information Theory. Dec 2022, Vol. 68, Issue 12, p7956-7970. 15p.
Publication Year :
2022

Abstract

In this paper, we study lower bounds on the generalization error of models derived from multi-layer neural networks, in the regime where the size of the layers is commensurate with the number of samples in the training data. We derive explicit generalization lower bounds for general biased estimators in the case of two-layer networks. For a linear activation function, the bound is asymptotically tight. In the nonlinear case, we compare our bounds with an empirical study of the stochastic gradient descent algorithm. In addition, we derive bounds for unbiased estimators, which show that the latter have unacceptable performance for truly nonlinear networks. The analysis uses elements from the theory of large random matrices.
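The empirical study referenced in the abstract compares the derived lower bounds against the generalization error achieved by SGD on a two-layer network. The following is a minimal, hypothetical sketch of such an experiment in the proportional regime (layer width and input dimension commensurate with the sample count): a teacher-student setup in which a student two-layer network with tanh activation is trained by plain SGD on squared loss, and the generalization error is measured on held-out data. The architecture, activation, learning rate, and dimensions here are illustrative assumptions, not the paper's actual experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Proportional regime: width and input dimension commensurate with n (assumed sizes).
n_train, n_test, d, width = 100, 100, 50, 50

# Hypothetical teacher: a random two-layer tanh network generating the labels.
W_teacher = rng.normal(size=(width, d)) / np.sqrt(d)
a_teacher = rng.normal(size=width) / np.sqrt(width)

def teacher(X):
    return np.tanh(X @ W_teacher.T) @ a_teacher

X_train = rng.normal(size=(n_train, d))
X_test = rng.normal(size=(n_test, d))
y_train = teacher(X_train)
y_test = teacher(X_test)

# Student: same architecture, randomly initialized, trained by plain SGD.
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.normal(size=width) / np.sqrt(width)
lr = 0.01

def mse(X, y):
    return np.mean((np.tanh(X @ W.T) @ a - y) ** 2)

initial_train_error = mse(X_train, y_train)

for epoch in range(100):
    for i in rng.permutation(n_train):
        x, y = X_train[i], y_train[i]
        h = np.tanh(W @ x)            # hidden-layer activations
        err = h @ a - y               # residual for squared loss
        grad_a = err * h              # gradient w.r.t. output weights
        grad_W = err * np.outer(a * (1 - h ** 2), x)  # tanh' = 1 - tanh^2
        a -= lr * grad_a
        W -= lr * grad_W

train_error = mse(X_train, y_train)
gen_error = mse(X_test, y_test)      # empirical generalization error
```

In an actual comparison, `gen_error` would be averaged over many random draws of the teacher and data, and plotted against the analytical lower bound as the ratio of samples to parameters varies.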

Details

Language :
English
ISSN :
0018-9448
Volume :
68
Issue :
12
Database :
Academic Search Index
Journal :
IEEE Transactions on Information Theory
Publication Type :
Academic Journal
Accession number :
160651280
Full Text :
https://doi.org/10.1109/TIT.2022.3189760