1. Improving k-means through distributed scalable metaheuristics
- Authors
F.P. Coutinho, Ricardo J. G. B. Campello, Murilo Coelho Naldi, and G.V. Oliveira
- Subjects
Theoretical computer science, Computer science, Cognitive neuroscience, Evolutionary algorithms, k-means clustering, Genetic algorithms, Artificial intelligence, Distributed algorithms, Scalability, Data mining, Cluster analysis, Metaheuristics
- Abstract
The growing size of datasets requires scalability from data mining algorithms, such as clustering algorithms. The MapReduce programming model provides the needed scalability, along with portability and automatic data safety and management. k-means is one of the most popular algorithms in data mining and can be easily adapted to the MapReduce model. Nevertheless, k-means has drawbacks, such as the need to provide the number of clusters (k) in advance and its sensitivity to the initial cluster prototypes. This paper presents two scalable evolutionary metaheuristics in MapReduce that automatically search for the solution with the optimal number of clusters and the best clustering structure for large-scale datasets. The first is an algorithm that iteratively improves k-means clusterings through evolutionary operators designed to handle distributed data. The second applies evolutionary k-means to cluster each distributed portion of the dataset independently and then combines the partial results into an ensemble. The proposed techniques are compared asymptotically and experimentally with other state-of-the-art clustering algorithms also developed in MapReduce. The results are analyzed with statistical tests and show that the first metaheuristic yielded the best clustering quality, while the second achieved the best computing times.
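The following is a minimal, illustrative sketch of the general idea of expressing a k-means iteration as map and reduce steps over a partitioned dataset; it is not the authors' implementation, does not use an actual MapReduce framework, and the function names `assign_step` and `update_step` are hypothetical.

```python
# Sketch: one k-means iteration in map/reduce style over data partitions.
# Assumption: this only illustrates the baseline MapReduce-style k-means
# the paper builds on, not the proposed evolutionary metaheuristics.
import numpy as np

def assign_step(partition, centroids):
    """Map: for one partition, emit per-cluster (point_sum, count) partials."""
    out = {}
    for x in partition:
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        s, c = out.get(j, (np.zeros_like(x), 0))
        out[j] = (s + x, c + 1)
    return out

def update_step(partials, k, old_centroids):
    """Reduce: merge partials from all partitions and recompute centroids."""
    sums = {j: (np.zeros(old_centroids.shape[1]), 0) for j in range(k)}
    for part in partials:
        for j, (s, c) in part.items():
            ts, tc = sums[j]
            sums[j] = (ts + s, tc + c)
    # Keep the old centroid for any cluster that received no points.
    return np.array([s / c if c > 0 else old_centroids[j]
                     for j, (s, c) in sorted(sums.items())])

# Usage: partitions stand in for distributed data blocks.
rng = np.random.default_rng(0)
data = rng.normal(size=(300, 2))
partitions = np.array_split(data, 3)
centroids = data[rng.choice(len(data), 3, replace=False)]
for _ in range(10):
    partials = [assign_step(p, centroids) for p in partitions]  # map phase
    centroids = update_step(partials, 3, centroids)             # reduce phase
```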
- Published
- 2017