1. Comparison of two methods of adding jitter to artificial neural network training
- Author
- Zur, R.M., Jiang, Y., and Metz, C.E.
- Subjects
- *ARTIFICIAL neural networks, *ARTIFICIAL intelligence, *NOISE, *COMPUTER assisted instruction
- Abstract
- We compare two methods of training artificial neural networks (ANNs) that potentially reduce the risk of the neural network overfitting the training data set. We refer to these methods as training with jitter. In one method of training with jitter, a new random noise vector is added to each training-data vector between successive iterations. In this work, we propose a different method of training with jitter, in which instead of adding different random noise vectors between iterations, a number of random vectors are used to expand the training data set prior to training. This artificially expanded data set is then used to train the artificial neural network in the conventional manner. These two methods are compared to the conventional method of training artificial neural networks. We find that although training with a single expanded training data set does increase the performance of the neural networks, overfitting can still occur after a large number of training iterations. [Copyright 2004 Elsevier]
- Published
- 2004
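No implementation accompanies this record; the Python sketch below only illustrates the two jitter schemes as the abstract describes them. The `train_step` callback, the Gaussian noise model, the noise width `sigma`, and the choice to keep the original vectors alongside the jittered copies are all assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def add_jitter(x, sigma=0.1):
    """Return a copy of x with zero-mean Gaussian noise added (assumed noise model)."""
    return x + rng.normal(scale=sigma, size=x.shape)

def train_per_iteration_jitter(x, y, train_step, n_iters=100, sigma=0.1):
    """First scheme: a fresh random noise vector is added to each
    training-data vector between successive iterations."""
    for _ in range(n_iters):
        train_step(add_jitter(x, sigma), y)

def train_expanded_jitter(x, y, train_step, n_iters=100, n_copies=10, sigma=0.1):
    """Second scheme (the one proposed in the paper): jittered copies are
    generated once, before training, and the artificially expanded data set
    is then trained on in the conventional manner."""
    x_aug = np.concatenate([x] + [add_jitter(x, sigma) for _ in range(n_copies)])
    y_aug = np.concatenate([y] * (n_copies + 1))
    for _ in range(n_iters):
        train_step(x_aug, y_aug)

# Usage with a placeholder training step (hypothetical; it stands in for
# one pass of ANN weight updates on the given batch):
x = rng.normal(size=(20, 4))        # 20 training vectors, 4 features
y = rng.integers(0, 2, size=20)     # binary class labels
train_step = lambda xb, yb: None
train_per_iteration_jitter(x, y, train_step)
train_expanded_jitter(x, y, train_step)
```

The practical difference is where the randomness enters: the first scheme resamples noise every iteration, so the network never sees the same inputs twice, while the second fixes a finite jittered set up front, which, per the abstract, still permits overfitting after many iterations.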