ezLDA: Efficient and Scalable LDA on GPUs
- Authors
Shilong Wang, Hang Liu, Anil Gaihre, and Hengyong Yu
- Subjects
Bayes methods, GPU, high performance computing, Latent Dirichlet Allocation, LDA, parallel algorithms, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Latent Dirichlet Allocation (LDA) is a statistical approach to topic modeling with a wide range of applications. Attracted by the exceptional computing and memory throughput of GPUs, this work introduces ezLDA, which achieves efficient and scalable LDA training on GPUs through three contributions. First, ezLDA introduces a three-branch sampling method that exploits the convergence heterogeneity of tokens to reduce redundant sampling work. Second, to enable sparsity-aware formats for both D and W on GPUs with fast sampling and updating, we introduce a hybrid format for W, along with a corresponding token partitioning for T and inverted index designs. Third, we design a hierarchical workload-balancing solution to address the extremely skewed workload imbalance on GPUs and to scale ezLDA across multiple GPUs. Taken together, ezLDA achieves superior performance over state-of-the-art systems with lower memory consumption.
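For readers unfamiliar with the sampling step the abstract refers to, the following is a minimal NumPy sketch of standard collapsed Gibbs sampling for LDA, the per-token kernel that GPU systems such as ezLDA accelerate. The matrices D (document-topic counts) and W (word-topic counts) follow the abstract's naming; the hyperparameters, toy corpus, and function name gibbs_pass are illustrative assumptions and do not reflect ezLDA's actual sparse, three-branch GPU implementation.

```python
import numpy as np

def gibbs_pass(docs, z, D, W, n_k, alpha, beta, rng):
    """One collapsed Gibbs sweep over all tokens.

    For each token (d, w), the conditional is proportional to
        (D[d, k] + alpha) * (W[w, k] + beta) / (n_k[k] + V * beta).
    Dense, sequential sketch for illustration; not ezLDA's sparse GPU kernel.
    """
    K = D.shape[1]
    V = W.shape[0]
    for t, (d, w) in enumerate(docs):
        k_old = z[t]
        # Remove the token's current assignment from all counts.
        D[d, k_old] -= 1
        W[w, k_old] -= 1
        n_k[k_old] -= 1
        # Unnormalized conditional over the K topics, then sample.
        p = (D[d] + alpha) * (W[w] + beta) / (n_k + V * beta)
        k_new = rng.choice(K, p=p / p.sum())
        # Record the new assignment.
        D[d, k_new] += 1
        W[w, k_new] += 1
        n_k[k_new] += 1
        z[t] = k_new

# Toy usage: 2 documents, 4-word vocabulary, K = 2 topics.
rng = np.random.default_rng(0)
docs = [(0, 1), (0, 3), (1, 2), (1, 1)]      # (doc_id, word_id) token pairs
K, num_docs, V = 2, 2, 4
z = rng.integers(K, size=len(docs))          # random initial topic per token
D = np.zeros((num_docs, K), dtype=np.int64)
W = np.zeros((V, K), dtype=np.int64)
for (d, w), k in zip(docs, z):
    D[d, k] += 1
    W[w, k] += 1
n_k = W.sum(axis=0)                          # per-topic token totals
for _ in range(50):                          # a few sweeps of the sampler
    gibbs_pass(docs, z, D, W, n_k, 0.1, 0.01, rng)
```

In real corpora the rows of D and W are highly sparse and per-token work is skewed across words, which is what motivates the hybrid sparse format for W and the hierarchical workload balancing described in the abstract.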
- Published
2023