DEEP NEURAL NETWORKS WITH RELU-SINE-EXPONENTIAL ACTIVATIONS BREAK CURSE OF DIMENSIONALITY IN APPROXIMATION ON HÖLDER CLASS.
- Source :
- SIAM Journal on Mathematical Analysis. 2023, Vol. 55 Issue 4, p3635-3649. 15p.
- Publication Year :
- 2023
Abstract
- In this paper, we construct neural networks with ReLU, sine, and 2^x as activation functions. For a general continuous f defined on [0, 1]^d with continuity modulus ω_f(·), we construct ReLU-sine-2^x networks that enjoy an approximation rate O(ω_f(√d)·2^(−M) + ω_f(√d·N^(−1))), where M, N ∈ ℕ^+ are the hyperparameters related to the widths of the networks. As a consequence, we can construct a ReLU-sine-2^x network with depth 6 and width max{...} that approximates f ∈ H^α_μ([0, 1]^d) within a given tolerance ε > 0 measured in the L^p norm, p ∈ [1, ∞), where H^α_μ([0, 1]^d) denotes the Hölder continuous function class defined on [0, 1]^d with order α ∈ (0, 1] and constant μ > 0. Therefore, ReLU-sine-2^x networks overcome the curse of dimensionality in approximation on H^α_μ([0, 1]^d). In addition to their super expressive power, functions implemented by ReLU-sine-2^x networks are (generalized) differentiable, which makes them trainable by stochastic gradient descent. [ABSTRACT FROM AUTHOR]
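To make the architecture concrete, below is a minimal Python/NumPy sketch of a single hidden layer whose units are split evenly across the three activations ReLU, sin, and 2^x. The layer width, the even three-way split, and the random weights are illustrative assumptions of this sketch, not the specific construction proved in the paper.

```python
import numpy as np

def relu_sine_exp_layer(x, W, b):
    """Affine map followed by ReLU / sine / 2^x on equal thirds of the units.
    The even three-way split is an assumption of this sketch."""
    z = x @ W + b
    k = z.shape[-1] // 3
    return np.concatenate([
        np.maximum(z[..., :k], 0.0),   # ReLU units
        np.sin(z[..., k:2 * k]),       # sine units
        np.exp2(z[..., 2 * k:]),       # 2^x units
    ], axis=-1)

rng = np.random.default_rng(0)
d, width = 4, 12                       # illustrative input dimension and width
x = rng.random((8, d))                 # 8 sample points in [0, 1]^d
W1, b1 = rng.normal(size=(d, width)), np.zeros(width)
W2, b2 = rng.normal(size=(width, 1)), np.zeros(1)
y = relu_sine_exp_layer(x, W1, b1) @ W2 + b2   # output of a shallow network
print(y.shape)                         # (8, 1)
```

Because ReLU is generalized-differentiable and sin and 2^x are smooth, every parameter above admits a (sub)gradient, which is what lets such networks be trained with stochastic gradient descent, as the abstract notes.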
- Subjects :
- *ARTIFICIAL neural networks
*HÖLDER spaces
*CONTINUOUS functions
Details
- Language :
- English
- ISSN :
- 0036-1410
- Volume :
- 55
- Issue :
- 4
- Database :
- Academic Search Index
- Journal :
- SIAM Journal on Mathematical Analysis
- Publication Type :
- Academic Journal
- Accession number :
- 171979541
- Full Text :
- https://doi.org/10.1137/21M144431X