1. Hybrid Sine Cosine and Fitness Dependent Optimizer for Global Optimization
- Authors
Po Chan Chiu, Ali Selamat, Ondrej Krejcar, and King Kuok Kuok
- Subjects
Fitness dependent optimizer, sine cosine algorithm, missing data, high missing rates, imputation, metaheuristic algorithms, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The fitness-dependent optimizer (FDO), a recently proposed swarm intelligence algorithm, is inspired by the reproductive mechanism of bee swarms and their collective decision-making. Unlike many swarm algorithms, FDO computes the velocity (pace) from a weight derived from the fitness function values, and uses this weight to update each search agent's position during the exploration and exploitation phases. However, FDO suffers from slow convergence and an unbalanced tradeoff between exploitation and exploration. Hence, this study proposes SC-FDO, a novel hybrid of the sine cosine algorithm and the fitness-dependent optimizer, which updates the velocity (pace) using the sine cosine scheme. SC-FDO was tested on 19 classical and 10 IEEE Congress on Evolutionary Computation (CEC-C06 2019) benchmark functions. The findings reveal that SC-FDO outperformed the original FDO and several well-known optimization algorithms in most cases, achieving a better exploration-exploitation tradeoff and faster convergence. SC-FDO was then applied to missing data estimation, reformulating the imputation task as an optimization problem. To our knowledge, this is the first time nature-inspired algorithms have been applied to time series datasets with both low and high missingness (10%-90%). The impact of missing data on the predictive ability of SC-FDO was evaluated on a large weather dataset spanning 1985 to 2020. The results show that imputation sensitivity depends on the percentage of missingness and on the imputation model. The SC-FDO-based multilayer perceptron (MLP) trainer outperformed the three other optimizer trainers, achieving the highest average accuracy of 90% when treating low-to-high missingness in the dataset.
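The abstract describes combining FDO's fitness-weighted pace with the sine cosine algorithm's oscillating step. The paper's exact update rule is not given here, so the following is only a minimal sketch under assumed forms: an FDO-style fitness weight (ratio of best fitness to the agent's fitness) scaling an SCA-style sine/cosine step toward the global best. All parameter ranges and the branch condition are illustrative assumptions, not the published formulas.

```python
import numpy as np

def sc_fdo_pace(x, x_best, fit, fit_best, rng):
    """Sketch of one agent's pace (velocity) update in an SC-FDO-style hybrid.

    Assumed forms for illustration only: the fitness weight fw follows the
    FDO idea of |best fitness / agent fitness|; the step itself follows the
    sine cosine algorithm's r1*sin(r2)/r1*cos(r2) scheme.
    """
    fw = abs(fit_best / fit) if fit != 0 else 0.0   # FDO-style fitness weight
    r1 = 2.0 * rng.random()                          # step-size control (assumed range)
    r2 = 2.0 * np.pi * rng.random()                  # angle for the sine/cosine term
    if rng.random() < 0.5:                           # SCA-style branch: sine or cosine
        pace = r1 * np.sin(r2) * fw * np.abs(x_best - x)
    else:
        pace = r1 * np.cos(r2) * fw * np.abs(x_best - x)
    return pace

# Hypothetical usage: move one agent toward an assumed global-best position.
rng = np.random.default_rng(0)
x = np.array([1.0, -2.0])
x_best = np.array([0.5, 0.0])
new_x = x + sc_fdo_pace(x, x_best, fit=4.0, fit_best=1.0, rng=rng)
```

In a full optimizer this update would run for every agent each iteration, with the best position and fitness refreshed whenever an agent improves on them.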
- Published
- 2021