
A Hybrid Method of Backpropagation and Particle Swarm Optimization for Enhancing Accuracy Performance

Authors :
I. Made Widiartha
Anak Agung Ngurah Gunawan
E. R. Ngurah Agus Sanjaya
Kartika Sari
Source :
Current Journal of Applied Science and Technology. 42:10-18
Publication Year :
2023
Publisher :
Sciencedomain International, 2023.

Abstract

Aims: Backpropagation is an algorithm for adjusting the weights of a neural network during the training stage. Backpropagation has proven effective at optimizing neural network weights; however, it needs improvement at the initialization stage, where random weight initialization can trap the network in a locally optimal solution. Applying a global-search algorithm is one alternative for overcoming this drawback, and one global-search method with strong performance is particle swarm optimization (PSO). In this research, we apply a hybrid of backpropagation and particle swarm optimization (BP-PSO) to overcome this weakness of backpropagation.

Study Design: Research Papers and Short Notes.

Place and Duration of Study: Department of Informatics, Faculty of Mathematics and Natural Sciences, Udayana University, between June 2022 and November 2022.

Methodology: The dataset used in this study is a set of handwritten mathematical symbol images: 240 images in total, 180 for training and 60 for testing. The robustness of PSO in finding a globally optimal solution is expected to help backpropagation escape locally optimal solutions. PSO is applied at the initial weight-initialization stage of the artificial neural network. The tuning parameters of the network are the number of neurons in the hidden layer and the learning rate: three hidden-layer sizes (10, 20, and 30 neurons) and five learning-rate values ranging from 0.1 to 0.9, with a minimum error of 0.01 and a maximum of 1000 epochs. Each test scenario is repeated five times.

Results: The results show that PSO successfully optimizes backpropagation: the accuracy of BP-PSO is higher than that of BP without optimization, at 97.2% versus 94.4%. The optimal learning rate is 0.1, and the optimal number of hidden-layer neurons is 30.

Conclusion: PSO succeeds in optimizing backpropagation, with BP-PSO achieving higher accuracy than BP without optimization. Using the PSO-optimized weights as the network's initial weights for retraining yields a higher average accuracy and a lower average number of epochs than training from non-optimized initial weights.
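The hybrid scheme described above — PSO searching globally over flattened weight vectors, with backpropagation then refining the best particle — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a standard global-best PSO, a single-hidden-layer sigmoid network, and a toy XOR dataset in place of the handwritten-symbol images; the swarm size, inertia, and acceleration coefficients are illustrative choices, since the paper's PSO parameters are not given here. The minimum error of 0.01 and the 1000-epoch cap follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR dataset standing in for the 180 handwritten-symbol training images.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 8, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # flat weight-vector length


def unpack(w):
    """Split a flat particle vector into the network's weights and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    return W1, b1, W2, w[i:]


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def mse(w):
    """Fitness function: mean squared error of the network encoded by w."""
    W1, b1, W2, b2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - y) ** 2))


def pso_init(n_particles=30, iters=150, inertia=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over weight vectors; returns the best particle found."""
    pos = rng.uniform(-1.0, 1.0, (n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, 1))
        r2 = rng.random((n_particles, 1))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest


def backprop(w, lr=0.1, max_epochs=1000, min_err=0.01):
    """Local refinement: gradient descent starting from the PSO-chosen weights."""
    w = w.copy()
    W1, b1, W2, b2 = unpack(w)  # views into w: in-place updates modify w
    for _ in range(max_epochs):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        if np.mean((out - y) ** 2) < min_err:
            break  # minimum-error stopping criterion from the abstract
        d_out = 2.0 * (out - y) / len(X) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return w


w0 = pso_init()        # PSO supplies the initial weights ...
w_star = backprop(w0)  # ... which backpropagation then refines
print(f"MSE after PSO init: {mse(w0):.4f}, after BP refinement: {mse(w_star):.4f}")
```

In this arrangement PSO only replaces the random initialization step; the training loop itself is unchanged, which is why the hybrid can reduce both the final error and the number of epochs backpropagation needs.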

Subjects

Subjects :
Psychiatry and Mental health

Details

ISSN :
2457-1024
Volume :
42
Database :
OpenAIRE
Journal :
Current Journal of Applied Science and Technology
Accession number :
edsair.doi...........9e7bf6138b33f77ec6f056feb191e25c
Full Text :
https://doi.org/10.9734/cjast/2023/v42i64072