CuLDA_CGS: Solving Large-scale LDA Problems on GPUs

03/13/2018
by Xiaolong Xie, et al.

Latent Dirichlet Allocation (LDA) is a popular topic model. Because the input corpus of an LDA algorithm typically consists of millions to billions of tokens, LDA training is very time-consuming, which can prevent its use in many scenarios, e.g., online services. GPUs have benefited modern machine learning and big data analysis because they provide high memory bandwidth and computation power, and many frameworks, e.g., TensorFlow, Caffe, and CNTK, support GPUs for accelerating data-intensive machine learning algorithms. However, we observe that existing LDA solutions on GPUs are unsatisfactory. In this paper, we present CuLDA_CGS, an efficient and scalable GPU-based approach for solving large-scale LDA problems at high throughput. To achieve this, we first carefully design a workload partitioning and synchronization mechanism to exploit the benefits of multiple GPUs. We then offload the LDA sampling process to each individual GPU and optimize it from the sampling algorithm, parallelization, and data compression perspectives. Evaluations show that CuLDA_CGS outperforms state-of-the-art LDA solutions by a large margin (up to 7.3X) on a single GPU, and achieves an additional 3.0X speedup on 4 GPUs. The source code is publicly available at https://github.com/cuMF/CuLDA_CGS.
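The abstract itself contains no code, but the training step it refers to is collapsed Gibbs sampling (CGS). The CUDA kernel below is a minimal single-GPU sketch of per-token CGS, for illustration only: the kernel and parameter names (sample_tokens_kernel, n_dk, n_wk, n_k, alpha, beta) are our own rather than taken from the CuLDA_CGS source, it assumes at most 128 topics, and it resolves concurrent count updates with atomics instead of the workload partitioning, inter-GPU synchronization, and data compression that the paper actually relies on.

```cuda
// Hedged sketch of per-token collapsed Gibbs sampling (CGS) for LDA on a GPU.
// All names and the atomics-based update scheme are illustrative assumptions,
// not the CuLDA_CGS implementation.
#include <curand_kernel.h>

__global__ void sample_tokens_kernel(
    const int *doc_ids,   // document index of each token
    const int *word_ids,  // vocabulary index of each token
    int *topics,          // current topic assignment of each token
    int *n_dk,            // doc-topic counts,  size num_docs * K
    int *n_wk,            // word-topic counts, size V * K
    int *n_k,             // per-topic totals,  size K
    int num_tokens, int K, float alpha, float beta, float V_beta,
    unsigned long long seed)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng);

    // Grid-stride loop over tokens; counts are shared, so updates use atomics,
    // a common GPU simplification of strictly sequential CGS.
    for (int t = tid; t < num_tokens; t += gridDim.x * blockDim.x) {
        int d = doc_ids[t], w = word_ids[t], old_z = topics[t];

        // Remove the token's current assignment from the count matrices.
        atomicSub(&n_dk[d * K + old_z], 1);
        atomicSub(&n_wk[w * K + old_z], 1);
        atomicSub(&n_k[old_z], 1);

        // Unnormalized CGS probabilities:
        //   p(z = k) ~ (n_dk + alpha) * (n_wk + beta) / (n_k + V * beta)
        float cdf[128];  // assumes K <= 128 for this sketch
        float sum = 0.0f;
        for (int k = 0; k < K; ++k) {
            sum += (n_dk[d * K + k] + alpha)
                 * (n_wk[w * K + k] + beta)
                 / (n_k[k] + V_beta);
            cdf[k] = sum;  // build the cumulative distribution in place
        }

        // Draw the new topic by inverting the cumulative distribution.
        float u = curand_uniform(&rng) * sum;
        int new_z = 0;
        while (new_z < K - 1 && cdf[new_z] < u) ++new_z;

        // Add the token back under its new assignment.
        atomicAdd(&n_dk[d * K + new_z], 1);
        atomicAdd(&n_wk[w * K + new_z], 1);
        atomicAdd(&n_k[new_z], 1);
        topics[t] = new_z;
    }
}
```

The per-token scan over all K topics is the hot loop that the paper's sampling-algorithm and data-layout optimizations target; a production sampler would typically exploit sparsity in the doc-topic and word-topic counts rather than scanning every topic as this sketch does.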


Related research

EZLDA: Efficient and Scalable LDA on GPUs (07/17/2020)
LDA is a statistical approach for topic modeling with a wide range of ap...

SaberLDA: Sparsity-Aware Learning of Topic Models on GPUs (10/08/2016)
Latent Dirichlet Allocation (LDA) is a popular tool for analyzing discre...

MegBA: A High-Performance and Distributed Library for Large-Scale Bundle Adjustment (12/02/2021)
Large-scale Bundle Adjustment (BA) is the key for many 3D vision applica...

Matrix Factorization on GPUs with Memory Optimization and Approximate Computing (08/11/2018)
Matrix factorization (MF) discovers latent features from observations, w...

CODAG: Characterizing and Optimizing Decompression Algorithms for GPUs (07/07/2023)
Data compression and decompression have become vital components of big-d...

On Privacy Protection of Latent Dirichlet Allocation Model Training (06/04/2019)
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique ...

Multi-GPU Distributed Parallel Bayesian Differential Topic Modelling (10/22/2015)
There is an explosion of data, documents, and other content, and people ...
