1. Teaching learning-based whale optimization algorithm for multi-layer perceptron neural network training
- Author
- Yan Biao Niu, Ming Jiang, Yong Quan Zhou, and Qi Fang Luo
- Subjects
- Computer science, artificial intelligence, artificial neural network, perceptron, multi-layer perceptron (MLP) neural network, whale optimization algorithm, teaching learning-based, simplex algorithm, metaheuristic algorithm, optimization algorithm, convergence - Abstract
This paper presents an improved teaching learning-based whale optimization algorithm (TSWOA) that incorporates the simplex method. First, combining the whale optimization algorithm (WOA) with the teaching learning-based algorithm not only achieves a better balance between the exploration and exploitation phases of WOA, but also gives the whales a self-learning ability grounded in their biological background, greatly enriching the theory of the original WOA. Second, the simplex method is added to optimize the current worst individual, preventing agents from searching at the boundary and improving the convergence accuracy and speed of the algorithm. To evaluate the performance of the improved algorithm, TSWOA is employed to train multi-layer perceptron (MLP) neural networks, a task for which it is difficult to devise a satisfactory and effective optimizer. Fifteen different data sets were selected from the UCI machine learning repository, and the statistical results were compared with those of GOA, GSO, SSO, FPA, GA, and WOA. The results show that TSWOA performs better than WOA and several well-established algorithms for training multi-layer perceptron neural networks.
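The training scheme the abstract describes can be sketched as follows: the MLP's weights and biases are flattened into one vector, the mean squared error of the resulting network serves as the fitness, a WOA population searches the weight space, and each iteration a simplex-style reflection moves the worst whale through the centroid of the rest. This is a minimal illustrative sketch, not the authors' TSWOA: the teaching-learning phase is omitted for brevity, and all names (`tswoa_sketch`, `unpack`, `mse`) and parameter values are assumptions chosen for the example.

```python
import numpy as np

def unpack(w, n_in, n_h):
    """Split a flat weight vector into the parameters of a 1-hidden-layer MLP."""
    i = n_in * n_h
    W1 = w[:i].reshape(n_in, n_h)          # input -> hidden weights
    b1 = w[i:i + n_h]                      # hidden biases
    W2 = w[i + n_h:i + 2 * n_h]            # hidden -> output weights
    b2 = w[-1]                             # output bias
    return W1, b1, W2, b2

def mse(w, X, y, n_in, n_h):
    """Fitness: mean squared error of the MLP encoded by w."""
    W1, b1, W2, b2 = unpack(w, n_in, n_h)
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return float(np.mean((out - y) ** 2))

def tswoa_sketch(X, y, n_in=2, n_h=4, pop=30, iters=300, seed=0):
    """WOA weight search plus a simplex reflection of the worst whale.

    Illustrative only: the teaching-learning phase of TSWOA is not included.
    """
    rng = np.random.default_rng(seed)
    dim = n_in * n_h + 2 * n_h + 1
    pos = rng.uniform(-1, 1, (pop, dim))
    fit = np.array([mse(p, X, y, n_in, n_h) for p in pos])
    b = int(fit.argmin())
    best, best_fit = pos[b].copy(), fit[b]
    for t in range(iters):
        a = 2.0 * (1 - t / iters)          # WOA control parameter, 2 -> 0
        for i in range(pop):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:             # exploitation: encircle current best
                    pos[i] = best - A * np.abs(C * best - pos[i])
                else:                      # exploration: move toward a random whale
                    r = pos[rng.integers(pop)]
                    pos[i] = r - A * np.abs(C * r - pos[i])
            else:                          # spiral (bubble-net) update around best
                l = rng.uniform(-1, 1)
                pos[i] = np.abs(best - pos[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            fit[i] = mse(pos[i], X, y, n_in, n_h)
        # Simplex-style step: reflect the worst whale through the centroid
        # of the others, keeping the reflection only if it improves fitness.
        w_idx = int(fit.argmax())
        centroid = (pos.sum(axis=0) - pos[w_idx]) / (pop - 1)
        refl = centroid + (centroid - pos[w_idx])
        f_refl = mse(refl, X, y, n_in, n_h)
        if f_refl < fit[w_idx]:
            pos[w_idx], fit[w_idx] = refl, f_refl
        b = int(fit.argmin())
        if fit[b] < best_fit:
            best, best_fit = pos[b].copy(), fit[b]
    return best, best_fit
```

On a toy task such as XOR, `tswoa_sketch` should drive the MSE below the 0.25 achieved by a constant 0.5 predictor; the design point to note is that the reflection step gives the worst agent a deterministic move away from the boundary, which is what the abstract credits for the improved convergence accuracy and speed.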
- Published
- 2020