Novel maximum-margin training algorithms for supervised neural networks
- Source :
- IEEE Transactions on Neural Networks, 21(6)
- Publication Year :
- 2010
-
Abstract
- This paper proposes three novel training methods for multilayer perceptron (MLP) binary classifiers: two based on the backpropagation approach and a third based on information theory. Both backpropagation methods follow the maximal-margin (MM) principle. The first, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, thereby avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity of solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexity O(N), whereas usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to shape the MLP hidden-layer output so that the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is used as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method.
The main idea is to compose a neural model from neurons extracted from three other neural networks, each previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network is named the assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
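To make the core idea concrete, the following is a minimal illustrative sketch of backpropagating the gradient of a margin-based (hinge-style) objective through both layers of an MLP binary classifier in a single process, as the abstract describes. The loss, architecture, toy data, and hyperparameters here are assumptions for illustration only, not the authors' MMGDX algorithm or its actual objective function.

```python
import numpy as np

# Illustrative sketch: one-hidden-layer MLP binary classifier trained by
# backpropagating a hinge-style margin objective. This is NOT the paper's
# MMGDX method; it only demonstrates the general margin-maximization idea.

rng = np.random.default_rng(0)

# Toy linearly separable data with labels in {-1, +1} (assumed for the demo)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

# Both layers are trained jointly in one backpropagation process,
# as the abstract emphasizes for MMGDX.
n_hidden = 8
W1 = rng.normal(scale=0.5, size=(2, n_hidden))
b1 = np.zeros(n_hidden)
w2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0

lr = 0.05
for epoch in range(300):
    # Forward pass: hidden representation and output-layer score
    H = np.tanh(X @ W1 + b1)
    s = H @ w2 + b2
    margin = y * s
    # Hinge-style loss: mean(max(0, 1 - y*s)) penalizes small margins
    viol = margin < 1.0
    # Gradient of the mean loss w.r.t. the output score
    g_s = np.where(viol, -y, 0.0) / len(y)
    grad_w2 = H.T @ g_s
    grad_b2 = g_s.sum()
    # Backpropagate through the tanh hidden layer
    g_H = np.outer(g_s, w2) * (1.0 - H ** 2)
    grad_W1 = X.T @ g_H
    grad_b1 = g_H.sum(axis=0)
    # Plain gradient-descent update (GDX would adapt the learning rate)
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    w2 -= lr * grad_w2
    b2 -= lr * grad_b2

pred = np.sign(np.tanh(X @ W1 + b1) @ w2 + b2)
accuracy = (pred == y).mean()
```

Note that each epoch costs O(N) time and space in the training-set size, consistent with the complexity the abstract claims for the proposed methods, in contrast to the O(N^3) time of typical SVM solvers.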
- Subjects :
- Computer Networks and Communications
Computer science
Information Theory
Feedback
Pattern Recognition, Automated
Kernel (linear algebra)
Artificial Intelligence
Margin (machine learning)
Humans
Learning
Computer Simulation
Time complexity
Artificial neural network
business.industry
Supervised learning
Pattern recognition
General Medicine
Linear discriminant analysis
Backpropagation
Computer Science Applications
Support vector machine
Hyperplane
ROC Curve
Multilayer perceptron
Artificial intelligence
Neural Networks, Computer
Gradient descent
business
Algorithm
Software
Algorithms
Details
- ISSN :
- 1941-0093
- Volume :
- 21
- Issue :
- 6
- Database :
- OpenAIRE
- Journal :
- IEEE Transactions on Neural Networks
- Accession number :
- edsair.doi.dedup.....ae2ed4691a895979077597eb14c3e3e8