Home
The knor package was found via this Stack Overflow question: https://stackoverflow.com/questions/20416944/parallel-k-means-in-r
The R package is on CRAN: https://cran.r-project.org/web/packages/knor/index.html
It is based on this research paper: https://arxiv.org/abs/1606.08905
From the abstract:
k-means is one of the most influential and utilized machine learning algorithms. Its computation limits the performance and scalability of many statistical analysis and machine learning tasks. We rethink and optimize k-means in terms of modern NUMA architectures to develop a novel parallelization scheme that delays and minimizes synchronization barriers. The k-means NUMA Optimized Routine (`knor`) library has (i) in-memory (`knori`), (ii) distributed-memory (`knord`), and (iii) semi-external-memory (`knors`) modules that radically improve the performance of k-means for varying memory and hardware budgets. `knori` boosts performance for single-machine datasets by an order of magnitude or more. `knors` improves the scalability of k-means on a memory budget using SSDs. `knors` scales to billions of points on a single machine, using a fraction of the resources that distributed in-memory systems require. `knord` retains `knori`'s performance characteristics, while scaling in-memory through distributed computation in the cloud. `knor` modifies Elkan's triangle inequality pruning algorithm such that we utilize it on billion-point datasets without the significant memory overhead of the original algorithm. We demonstrate `knor` outperforms distributed commercial products like H2O, Turi (formerly Dato, GraphLab) and Spark's MLlib by more than an order of magnitude for datasets of 10^7 to 10^9 points.
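The memory overhead that `knor` works around is easy to quantify: Elkan's original algorithm keeps one lower bound per point-center pair, while single-bound variants keep a constant number of bounds per point. A rough back-of-the-envelope calculation (the n and k values are illustrative, not from the paper):

```python
# Approximate bound-storage cost of Elkan's algorithm vs. a
# single-lower-bound variant, assuming float64 (8-byte) bounds.
n = 10**9   # points (illustrative billion-point dataset)
k = 100     # clusters (illustrative)

elkan_bytes = n * k * 8    # one lower bound per point-center pair
single_bytes = n * 8       # one lower bound per point

print(elkan_bytes / 1e9, "GB vs", single_bytes / 1e9, "GB")  # 800.0 GB vs 8.0 GB
```

At this scale the per-pair bounds alone dwarf the dataset itself, which is why the modification matters for billion-point runs.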
Elkan's original paper, "Using the Triangle Inequality to Accelerate k-Means" (ICML 2003), can be found here: https://www.aaai.org/Papers/ICML/2003/ICML03-022.pdf
From the abstract:
The k-means algorithm is by far the most widely used method for discovering clusters in data. We show how to accelerate it dramatically, while still always computing exactly the same result as the standard algorithm. The accelerated algorithm avoids unnecessary distance calculations by applying the triangle inequality in two different ways, and by keeping track of lower and upper bounds for distances between points and centers. Experiments show that the new algorithm is effective for datasets with up to 1000 dimensions, and becomes more and more effective as the number k of clusters increases. For k > 20 it is many times faster than the best previously known accelerated k-means method.
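The core of Elkan's first triangle-inequality test is compact: if d(c, c') >= 2·d(x, c), then d(x, c') >= d(x, c), so the distance from x to c' never needs to be computed. Below is a minimal NumPy sketch of a single assignment pass using only this test — an illustration, not the paper's full algorithm (which also maintains per-point upper and lower bounds across iterations):

```python
import numpy as np

def assign_elkan(X, C):
    """Assign each point to its nearest center, pruning distance
    computations with Elkan's center-center triangle-inequality test."""
    # Pairwise center-center distances, computed once per pass.
    cc = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=2)
    labels = np.empty(len(X), dtype=int)
    computed = 0
    for i, x in enumerate(X):
        best = 0
        d_best = np.linalg.norm(x - C[0])
        computed += 1
        for j in range(1, len(C)):
            # If d(c_best, c_j) >= 2 * d(x, c_best), the triangle inequality
            # gives d(x, c_j) >= d(c_best, c_j) - d(x, c_best) >= d(x, c_best),
            # so c_j cannot be closer and its distance is never computed.
            if cc[best, j] >= 2 * d_best:
                continue
            d = np.linalg.norm(x - C[j])
            computed += 1
            if d < d_best:
                best, d_best = j, d
        labels[i] = best
    return labels, computed

rng = np.random.default_rng(0)
C = rng.normal(size=(20, 5)) * 10                       # well-separated centers
X = C[rng.integers(0, 20, size=500)] + rng.normal(size=(500, 5)) * 0.1
labels, computed = assign_elkan(X, C)

# Same result as the exact (all-pairs) assignment, with fewer distances.
naive = np.argmin(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
assert (labels == naive).all()
print(computed, "of", 500 * 20, "distances computed")
```

The pruning gets stronger once the scan reaches the true nearest center, because d(x, c_best) becomes small relative to the center-center distances.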
Hamerly's paper, "Making k-means Even Faster", can be found here: https://www.researchgate.net/publication/220906984_Making_k-means_Even_Faster
From the abstract:
The k-means algorithm is widely used for clustering, compressing, and summarizing vector data. In this paper, we propose a new acceleration for exact k-means that gives the same answer, but is much faster in practice. Like Elkan's accelerated algorithm (8), our algorithm avoids distance computations using distance bounds and the triangle inequality. Our algorithm uses one novel lower bound for point-center distances, which allows it to eliminate the innermost k-means loop 80% of the time or more in our experiments. On datasets of low and medium dimension (e.g. up to 50 dimensions), our algorithm is much faster than other methods, including methods based on low-dimensional indexes, such as k-d trees. Other advantages are that it is very simple to implement and it has a very small memory overhead, much smaller than other accelerated algorithms.
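Hamerly's key structure fits in a few lines: each point keeps an upper bound u(x) on the distance to its assigned center and a single lower bound l(x) on the distance to every other center; whenever u(x) <= l(x) after the bounds are updated, the entire inner loop over centers is skipped. A sketch of one such iteration on synthetic data — the bound updates follow the standard triangle-inequality bookkeeping, not the paper's exact code:

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.normal(size=(10, 3)) * 10                       # 10 centers in 3-D
X = C[rng.integers(0, 10, size=300)] + rng.normal(size=(300, 3)) * 0.1

# One exact assignment pass to seed the bounds.
D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
labels = D.argmin(axis=1)
upper = D[np.arange(len(X)), labels]        # u(x): distance to own center
lower = np.sort(D, axis=1)[:, 1]            # l(x): distance to 2nd-closest

# Simulate a k-means iteration in which the centers drift slightly.
drift = rng.normal(size=C.shape) * 0.01
C_new = C + drift
shift = np.linalg.norm(drift, axis=1)

# Bound maintenance: u grows by the own center's shift, l shrinks by the
# largest shift of any center (a safe triangle-inequality update).
upper = upper + shift[labels]
lower = lower - shift.max()

# Hamerly's test: if u(x) <= l(x), the assignment cannot have changed,
# so no distance to any center needs to be recomputed for this point.
skip = upper <= lower
print(f"inner loop skipped for {skip.mean():.0%} of points")
```

Only two bounds per point are stored, which is the "very small memory overhead" the abstract contrasts with Elkan's per-pair bounds.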
Paper link (Ryšavý and Hamerly, SDM 2016): http://cs.baylor.edu/~hamerly/papers/sdm2016_rysavy_hamerly.pdf
From the abstract:
The k-means algorithm is popular for data clustering applications. Most implementations use Lloyd’s algorithm, which does many unnecessary distance calculations. Several accelerated algorithms (Elkan’s, Hamerly’s, heap, etc.) have recently been developed which produce exactly the same answer as Lloyd’s, only faster. They avoid redundant work using the triangle inequality paired with a set of lower and upper bounds on point-centroid distances. In this paper we propose several novel methods that allow those accelerated algorithms to perform even better, giving up to eight times further speedup. Our methods give tighter lower bound updates, efficiently skip centroids that cannot possibly be close to a set of points, keep extra information about upper bounds to help the heap algorithm avoid more distance computations, and decrease the number of distance calculations that are done in the first iteration.
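The first improvement listed, tighter lower-bound updates, targets the standard rule used by Elkan's and Hamerly's algorithms: when a center c moves to c', every lower bound involving c is decreased by the full movement ||c' - c||, even though the actual distance usually changes by much less. A small numeric illustration of that slack (the paper's tighter update formulas are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=3)                  # a data point
c = rng.normal(size=3)                  # a center before the update step
c_new = c + rng.normal(size=3) * 0.5    # the same center after moving

d_old = np.linalg.norm(x - c)
d_new = np.linalg.norm(x - c_new)
move = np.linalg.norm(c_new - c)

# Standard update: a lower bound on d(x, c) may safely be decreased by
# the full movement, since |d(x, c') - d(x, c)| <= ||c' - c||.
assert d_new >= d_old - move

# The gap between this worst case and the actual new distance is the
# slack that tighter updates recover; it is zero only when x lies on
# the line along which the center moved.
slack = d_new - (d_old - move)
print(f"movement {move:.3f}, actual change {d_new - d_old:+.3f}, slack {slack:.3f}")
```

For most point/center geometries the slack is a large fraction of the movement, which is why tightening the update lets the bound-based tests fire more often.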