Normal approximation of Random Gaussian Neural Networks
- Publication Year :
- 2023
Abstract
- In this paper we provide explicit upper bounds on some distances between the (law of the) output of a random Gaussian neural network and (the law of) a random Gaussian vector. Our results cover both shallow random Gaussian neural networks with univariate output and fully connected, deep random Gaussian neural networks, with a rather general activation function. The upper bounds show how the widths of the layers, the activation function and other architecture parameters affect the Gaussian approximation of the output. Our techniques, relying on Stein's method and integration-by-parts formulas for the Gaussian law, yield estimates on distances that are integral probability metrics, including the total variation and convex distances. These latter metrics are defined by testing against indicator functions of suitable measurable sets, and so allow for accurate estimates of the probability that the output is localized in some region of the space. Such estimates are of significant interest from both a practitioner's and a theorist's perspective.
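The Gaussian approximation described in the abstract can be illustrated numerically. The sketch below is not the paper's method (no Stein's method or explicit bounds are computed); it merely simulates a shallow random Gaussian network with univariate output, using an assumed 1/sqrt(width) scaling and a tanh activation chosen for illustration, and shows that the empirical output distribution concentrates around a centered Gaussian as the width grows.

```python
import numpy as np

def shallow_gaussian_nn(x, width, rng, activation=np.tanh):
    # One hidden layer with i.i.d. standard Gaussian weights; the
    # 1/sqrt(width) scaling keeps the output variance of order one.
    W = rng.standard_normal((width, x.shape[0]))
    v = rng.standard_normal(width)
    return v @ activation(W @ x) / np.sqrt(width)

rng = np.random.default_rng(0)
x = np.ones(3)  # fixed input; the output is then a random scalar
outputs = np.array([shallow_gaussian_nn(x, 512, rng) for _ in range(20000)])
# For large width the output law is approximately a centered Gaussian,
# so the empirical mean should be near zero.
print(outputs.mean(), outputs.std())
```

A histogram of `outputs` against a fitted normal density makes the approximation visible; the paper's contribution is to quantify this convergence in total variation and convex distance as a function of the width and the activation.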
- Subjects :
- Mathematics - Probability
Mathematics - Analysis of PDEs
60F05, 68T07
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2307.04486
- Document Type :
- Working Paper