101. Choosing the Best Auto-Encoder-Based Bagging Classifier: An Empirical Study
- Author
Wenge Rong, Zhang Xiong, Yikang Shen, Yifan Nie, and Chao Li
- Subjects
Empirical research, Artificial neural network, Computer science, Artificial intelligence, Machine learning, Generalization error, Autoencoder, Feature learning, Classifier
- Abstract
Feature learning plays an important role in many machine learning tasks. As a common implementation of feature learning, the auto-encoder has shown excellent performance. However, it also faces several challenges, a notable one being how to reduce its generalization error. Different approaches have been proposed to address this problem, and bagging has been lauded as a possible solution since it is easy to implement while still promising strong performance. This paper studies the problem of integrating different prediction models by bagging auto-encoder-based classifiers in order to reduce generalization error and improve prediction performance. Furthermore, an experimental study is conducted on datasets from different domains, and several integration schemas are empirically evaluated to analyse their pros and cons. It is believed that this work will offer researchers in this field insight into bagging auto-encoder-based classifiers.
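To make the general scheme concrete, below is a minimal sketch (not the authors' implementation) of bagging classifiers trained on auto-encoder features, written with scikit-learn. The digits dataset, the single 32-unit hidden layer used as the encoder, the logistic-regression base learner, and plain majority voting are all illustrative assumptions rather than choices taken from the paper.

```python
# Sketch: feature learning with an auto-encoder, then bagging of classifiers
# trained on bootstrap resamples of the encoded features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Feature learning: approximate an auto-encoder with an MLP trained to
#    reconstruct its own input (input -> 32 hidden units -> input).
ae = MLPRegressor(hidden_layer_sizes=(32,), activation="relu",
                  max_iter=500, random_state=0)
ae.fit(X_train, X_train)

def encode(X):
    # Hidden-layer activations of the fitted auto-encoder (ReLU of the first layer).
    return np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

Z_train, Z_test = encode(X_train), encode(X_test)

# 2. Bagging: train base classifiers on bootstrap resamples of the encoded data.
n_estimators = 10
members = []
for seed in range(n_estimators):
    Zb, yb = resample(Z_train, y_train, random_state=seed)  # bootstrap sample
    members.append(LogisticRegression(max_iter=1000).fit(Zb, yb))

# 3. Integration: aggregate the ensemble's predictions by majority vote.
votes = np.stack([m.predict(Z_test) for m in members])        # (n_estimators, n_test)
y_pred = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
print("bagged accuracy:", np.mean(y_pred == y_test))
```

Other integration schemas evaluated in this line of work (for example, averaging class probabilities instead of hard voting, or bagging over the auto-encoders themselves rather than only the downstream classifiers) can be swapped into step 3 without changing the overall structure.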
- Published
- 2014