Improving the reliability of test functions generators.
- Source :
- Applied Soft Computing, Jul 2020, Vol. 92
- Publication Year :
- 2020
Abstract
- Computational intelligence methods have gained importance in several real-world domains such as process optimization, system identification, data mining, or statistical quality control. However, tools that determine the performance of computational intelligence methods in these application domains in an objective manner are missing. Statistics provides methods for comparing algorithms on certain data sets. In the past, several test suites were presented and considered state of the art. However, these test suites have several drawbacks: (i) problem instances are somewhat artificial and have no direct link to real-world settings; (ii) since there is a fixed number of test instances, algorithms can be fitted or tuned to this specific and very limited set of test functions; (iii) statistical tools for comparing several algorithms on several test problem instances are relatively complex and not easy to analyze. We propose a methodology to overcome these difficulties. It is based on standard ideas from statistics: analysis of variance and its extension to mixed models. This paper combines essential ideas from two approaches: problem generation and statistical analysis of computer experiments.
  • Random selection of test problems plus mixed-model ANOVA enables generalization.
  • Overcomes over-fitting on fixed artificial test problems.
  • Suitable for both single- and multi-algorithm performance on whole problem classes.
  • Pairwise algorithm performance comparison with confidence intervals.
  [ABSTRACT FROM AUTHOR]
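- Illustration (not from the article): the abstract's core idea, treating randomly generated problem instances as a random effect in a mixed model so that algorithm comparisons generalize to the whole problem class, can be sketched in a few lines of Python. The data, effect sizes, and variable names below are hypothetical; the paper's own generators and analysis may differ.

```python
# Minimal sketch, under assumed synthetic data: two algorithms are run on
# randomly drawn problem instances, and a mixed-effects model with a random
# intercept per instance estimates the pairwise performance difference.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Hypothetical performance data: 30 randomly generated instances, each solved
# 10 times by algorithms "A" and "B"; 'y' is the achieved objective value.
rows = []
for inst in range(30):
    instance_effect = rng.normal(0.0, 1.0)                # random instance difficulty
    for algo, algo_effect in (("A", 0.0), ("B", -0.5)):   # "B" assumed better here
        for _ in range(10):
            y = 10.0 + instance_effect + algo_effect + rng.normal(0.0, 0.3)
            rows.append({"instance": inst, "algorithm": algo, "y": y})
df = pd.DataFrame(rows)

# Mixed model: fixed effect for the algorithm, random intercept per instance,
# so inference refers to the problem class rather than the sampled instances.
result = smf.mixedlm("y ~ algorithm", data=df, groups=df["instance"]).fit()
print(result.summary())

# Approximate 95% confidence interval for the pairwise difference B - A.
print(result.conf_int().loc["algorithm[T.B]"])
```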
Details
- Language :
- English
- ISSN :
- 1568-4946
- Volume :
- 92
- Database :
- Supplemental Index
- Journal :
- Applied Soft Computing
- Publication Type :
- Academic Journal
- Accession number :
- 143385881
- Full Text :
- https://doi.org/10.1016/j.asoc.2020.106315