Back-propagating errors through artificial neural networks for variable selection.
- Author
- Hu, Junying; Chang, Peiju; Du, Fang; Fei, Rongrong; Sun, Kai; Zhang, Jiangshe; Zhang, Hai
- Subjects
- ARTIFICIAL neural networks; DEEP learning; IMAGE recognition (Computer vision); ABSOLUTE value; MACHINE learning
- Abstract
Variable selection is one of the most important topics in machine learning research, and a large number of variable selection methods have been proposed to date. From a deep learning perspective, we propose a new variable selection method based on multi-layer artificial neural networks. The method propagates the errors obtained from a trained network model back through the network, from the output layer to the input layer; the values received by the input layer indicate the relative contribution of each input variable. Variables with large contribution values are selected, and we call this method neural inverse propagation (NIP). Specifically, after training a neural network, the errors between the actual network outputs and the desired outputs are propagated through the network from top to bottom until they reach the input layer. The absolute values received by the input units measure the relative importance (contribution) of the corresponding input variables: the larger the value, the more important the variable. Variable selection is then performed by removing the k variables whose input units receive the k smallest absolute values. The efficiency of the proposed method is illustrated through its application to classification and regression tasks on a number of synthetic datasets and a real-world image classification dataset. [ABSTRACT FROM AUTHOR]
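The procedure described in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the network size, learning rate, synthetic data, and variable names below are all assumptions, and the key NIP step is simply back-propagating the trained network's output errors one extra layer, down to the input units, and ranking inputs by the mean absolute value they receive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic setup: 10 input variables, only the first 3 informative.
n, d, k_remove = 200, 10, 5
X = rng.normal(size=(n, d))
y = (2.0 * X[:, 0] - 1.5 * X[:, 1] + X[:, 2]).reshape(-1, 1)

# One-hidden-layer tanh network trained with plain gradient descent.
h = 16
W1 = rng.normal(scale=0.1, size=(d, h)); b1 = np.zeros(h)
W2 = rng.normal(scale=0.1, size=(h, 1)); b2 = np.zeros(1)
lr = 1e-2

for _ in range(2000):
    A = np.tanh(X @ W1 + b1)            # hidden activations
    err = (A @ W2 + b2) - y             # error at the output layer
    dA = err @ W2.T * (1 - A**2)        # error at the hidden layer
    W2 -= lr * A.T @ err / n; b2 -= lr * err.mean(axis=0)
    W1 -= lr * X.T @ dA / n;  b1 -= lr * dA.mean(axis=0)

# NIP step: propagate the trained network's output errors one layer further,
# to the input units, and score each variable by the mean absolute value
# arriving there.
A = np.tanh(X @ W1 + b1)
err = (A @ W2 + b2) - y
delta_in = (err @ W2.T * (1 - A**2)) @ W1.T   # error at the input layer
scores = np.abs(delta_in).mean(axis=0)        # one score per input variable

# Remove the k variables with the k smallest scores; keep the rest.
keep = np.argsort(scores)[k_remove:]
print("kept variables:", sorted(keep.tolist()))
```

The ranking here plays the same role as a gradient-based saliency score: input units connected through large trained weights receive larger back-propagated error values, so uninformative variables tend to receive the smallest scores and are dropped first.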
- Published
- 2024