
QuicK-means: Acceleration of K-means by learning a fast transform

Authors :
Luc Giffon
Valentin Emiya
Hachem Kadri
Liva Ralaivola
éQuipe d'AppRentissage de MArseille (QARMA)
Laboratoire d'Informatique et Systèmes (LIS)
Aix Marseille Université (AMU)-Université de Toulon (UTLN)-Centre National de la Recherche Scientifique (CNRS)
Criteo, Paris
ANR-16-CE23-0006, Deep_in_France: Réseaux de neurones profonds pour l'apprentissage (Deep neural networks for learning), 2016
Source :
Machine Learning, Springer Verlag, 2021, 110, pp. 881-905. ⟨10.1007/s10994-021-05965-0⟩
Publication Year :
2021
Publisher :
HAL CCSD, 2021.

Abstract

K-means -- and the celebrated Lloyd algorithm -- is more than the clustering method it was originally designed to be. It has indeed proven pivotal to help increase the speed of many machine learning and data analysis techniques such as indexing, nearest-neighbor search and prediction, data compression, and Radial Basis Function networks; its beneficial use has been shown to carry over to the acceleration of kernel machines (when using the Nyström method). Here, we propose a fast extension of K-means, dubbed QuicK-means, that rests on the idea of expressing the matrix of the $K$ centroids as a product of sparse matrices, a feat made possible by recent results devoted to finding approximations of matrices as a product of sparse factors. Using such a decomposition squashes the complexity of the matrix-vector product between the factorized $K \times D$ centroid matrix $\mathbf{U}$ and any vector from $\mathcal{O}(KD)$ to $\mathcal{O}(A \log A + B)$, with $A = \min(K, D)$ and $B = \max(K, D)$, where $D$ is the dimension of the training data. This drastic computational saving has a direct impact on the assignment of a point to a cluster, meaning that it is tangible not only at prediction time but also at training time, provided the factorization procedure is performed during Lloyd's algorithm. We precisely show that resorting to a factorization step at each iteration does not impair the convergence of the optimization scheme and that, depending on the context, it may entail a reduction of the training time. Finally, we provide discussions and numerical simulations that show the versatility of our computationally efficient QuicK-means algorithm.
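
To make the computational idea concrete, below is a minimal Python sketch of the fast centroid-vector product described in the abstract. It is not the authors' implementation: the number of factors, their shapes and densities, and the helper name fast_matvec are illustrative assumptions, and the sparse factors are random placeholders rather than factors learned inside Lloyd's algorithm as in the paper.

# Minimal sketch (illustrative, not the paper's code): the K x D centroid
# matrix U is stored as a product of sparse factors S_1 ... S_Q, so U @ x
# never requires forming U densely.
import numpy as np
import scipy.sparse as sp

K, D, Q = 32, 16, 4  # number of centroids, data dimension, number of factors

# Hypothetical random sparse factors; in QuicK-means they would instead be
# learned by a sparse-factorization step at each Lloyd iteration.
factors = [sp.random(K, D if q == Q - 1 else K, density=0.25,
                     random_state=q, format="csr")
           for q in range(Q)]

def fast_matvec(factors, x):
    """Compute (S_1 @ ... @ S_Q) @ x by applying factors right to left."""
    for S in reversed(factors):
        x = S @ x  # sparse matrix times vector: cost is nnz(S)
    return x

x = np.random.default_rng(0).standard_normal(D)
U = np.linalg.multi_dot([S.toarray() for S in factors])  # dense reference
assert np.allclose(U @ x, fast_matvec(factors, x))

# Cluster assignment: argmin_k ||u_k - x||^2 = argmin_k (||u_k||^2 - 2 <u_k, x>),
# so one fast product per point suffices once the centroid norms are cached.
norms = np.sum(U ** 2, axis=1)  # in practice, cached after each centroid update
assignment = int(np.argmin(norms - 2 * fast_matvec(factors, x)))
print("point assigned to cluster", assignment)

Applying the factors right to left keeps every intermediate result a vector, so the total cost is the number of nonzeros across the factors; the learned decomposition keeps that count at roughly $\mathcal{O}(A \log A + B)$ instead of the $KD$ entries of the dense centroid matrix.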

Details

Language :
English
ISSN :
0885-6125 and 1573-0565
Database :
OpenAIRE
Journal :
Machine Learning, Springer Verlag, 2021, 110, pp. 881-905. ⟨10.1007/s10994-021-05965-0⟩
Accession number :
edsair.doi.dedup.....87c904d79552fb9a1fc8d3e65b1bf188