
slimTrain - A Stochastic Approximation Method for Training Separable Deep Neural Networks

Authors:
Newman, Elizabeth
Chung, Julianne
Chung, Matthias
Ruthotto, Lars
Source:
SIAM Journal on Scientific Computing, 2022, Vol. 44, Issue 4, pp. A2322-A2348 (27 pp.)
Publication Year:
2022

Abstract

Deep neural networks (DNNs) have shown their success as high-dimensional function approximators in many applications; however, training DNNs can be challenging in general. DNN training is commonly phrased as a stochastic optimization problem whose challenges include nonconvexity, nonsmoothness, insufficient regularization, and complicated data distributions. Hence, the performance of DNNs on a given task depends crucially on tuning hyperparameters, especially learning rates and regularization parameters. In the absence of theoretical guidelines or prior experience on similar tasks, this requires solving a series of repeated training problems, which can be time-consuming and demanding on computational resources. This can limit the applicability of DNNs to problems with nonstandard, complex, and scarce datasets, e.g., those arising in many scientific applications. To remedy the challenges of DNN training, we propose slimTrain, a stochastic optimization method for training DNNs with reduced sensitivity to the choice of hyperparameters and fast initial convergence. The central idea of slimTrain is to exploit the separability inherent in many DNN architectures; that is, we separate the DNN into a nonlinear feature extractor followed by a linear model. This separability allows us to leverage recent advances made for solving large-scale, linear, ill-posed inverse problems. Crucially, for the linear weights, slimTrain does not require a learning rate and automatically adapts the regularization parameter. In our numerical experiments using function approximation tasks arising in surrogate modeling and dimensionality reduction, slimTrain outperforms existing DNN training methods with the recommended hyperparameter settings and reduces the sensitivity of DNN training to the remaining hyperparameters. Since our method operates on mini-batches, its computational overhead per iteration is modest and savings can be realized by reducing the number of iterations (due to quicker initial convergence) or the number of training problems that need to be solved to identify effective hyperparameters.
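To make the separable-training idea described in the abstract concrete, below is a minimal PyTorch sketch, not the authors' slimTrain implementation: the network is split into a nonlinear feature extractor with weights theta and a final linear layer with weights W; on each mini-batch, W is obtained from a closed-form Tikhonov-regularized least-squares solve (so no learning rate is needed for the linear weights), while theta is updated with a standard stochastic optimizer. The fixed regularization parameter alpha, the Adam optimizer for theta, the toy regression data, and the exact per-batch solve are all simplifying assumptions; slimTrain itself adapts the regularization parameter automatically using methods from large-scale, ill-posed inverse problems.

```python
# Minimal sketch (assumptions noted above) of separable DNN training:
# nonlinear feature extractor (weights theta) + final linear layer (weights W).
import torch

torch.manual_seed(0)

# Toy regression data: approximate a smooth function R^2 -> R^1.
n, d_in, d_out, width = 2000, 2, 1, 32
X = torch.rand(n, d_in) * 4 - 2
Y = torch.sin(X[:, :1]) * torch.cos(X[:, 1:])

# Nonlinear feature extractor (weights theta), updated with Adam (assumption).
feature = torch.nn.Sequential(
    torch.nn.Linear(d_in, width), torch.nn.Tanh(),
    torch.nn.Linear(width, width), torch.nn.Tanh(),
)
opt_theta = torch.optim.Adam(feature.parameters(), lr=1e-3)

alpha = 1e-2  # fixed Tikhonov parameter (slimTrain adapts this automatically)

def solve_linear(Z, Yb, alpha):
    """Closed-form Tikhonov solution: argmin_W ||Z W - Yb||^2 + alpha ||W||^2."""
    k = Z.shape[1]
    A = Z.T @ Z + alpha * torch.eye(k)
    return torch.linalg.solve(A, Z.T @ Yb)

batch = 64
for it in range(500):
    idx = torch.randint(0, n, (batch,))
    Xb, Yb = X[idx], Y[idx]

    # 1) Features from the current extractor; append a bias column.
    Zb = feature(Xb)
    Zb1 = torch.cat([Zb, torch.ones(batch, 1)], dim=1)

    # 2) Linear weights: exact regularized least-squares solve, no learning rate.
    with torch.no_grad():
        W = solve_linear(Zb1, Yb, alpha)

    # 3) Stochastic gradient step on theta with W held fixed.
    loss = torch.mean((Zb1 @ W - Yb) ** 2)
    opt_theta.zero_grad()
    loss.backward()
    opt_theta.step()

print(f"final mini-batch loss: {loss.item():.4f}")
```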

Details

Language:
English
ISSN:
1064-8275
Volume:
44
Issue:
4
Database:
Academic Search Index
Journal:
SIAM Journal on Scientific Computing
Publication Type:
Academic Journal
Accession Number:
159825127
Full Text:
https://doi.org/10.1137/21M1452512