Understanding Training Efficiency of Deep Learning Recommendation Models at Scale

11/11/2020
by   Bilge Acun, et al.

The use of GPUs has proliferated across machine learning workflows and is now considered mainstream for many deep learning models. However, when training state-of-the-art personalized recommendation models, which consume the largest number of compute cycles in our large-scale datacenters, GPUs pose various challenges because these models contain both compute-intensive and memory-intensive components. GPU performance and efficiency for these models are largely determined by model architecture configurations, such as the mix of dense and sparse features and the MLP dimensions. Furthermore, these models often contain large embedding tables that do not fit into limited GPU memory. The goal of this paper is to explain the intricacies of using GPUs for training recommendation models, the factors affecting hardware efficiency at scale, and learnings from a new scale-up GPU server design, Zion.
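To make the embedding-table memory claim concrete, here is a minimal back-of-the-envelope sketch. All table sizes, embedding dimensions, and the GPU capacity below are illustrative assumptions, not figures from the paper:

```python
# Illustrative sketch: estimate the memory footprint of a recommendation
# model's embedding tables and compare it to one GPU's memory capacity.
# All numbers are hypothetical, chosen only to show why large tables
# often cannot fit on a single device.

def embedding_table_bytes(num_rows: int, embedding_dim: int,
                          bytes_per_elem: int = 4) -> int:
    """Memory footprint of one embedding table (fp32 by default)."""
    return num_rows * embedding_dim * bytes_per_elem

# Hypothetical sparse features: (cardinality, embedding dimension) pairs.
tables = [
    (100_000_000, 128),  # e.g. a user-id feature
    (10_000_000, 128),   # e.g. an item-id feature
    (1_000_000, 64),     # a smaller categorical feature
]

total_gb = sum(embedding_table_bytes(n, d) for n, d in tables) / 1e9
gpu_memory_gb = 32  # e.g. a single 32 GB accelerator

print(f"embedding tables: {total_gb:.1f} GB, GPU memory: {gpu_memory_gb} GB")
if total_gb > gpu_memory_gb:
    print("tables must be sharded across devices or kept in host memory")
else:
    print("tables fit on one GPU")
```

Even these modest hypothetical tables exceed a single device's memory, which is why sharding strategies and host-memory placement matter for training efficiency.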

Related research

- Heterogeneous Acceleration Pipeline for Recommendation System Training (04/11/2022): Recommendation systems are unique as they show a conflation of compute a...
- Deep Learning Recommendation Model for Personalization and Recommendation Systems (05/31/2019): With the advent of deep learning, neural network-based recommendation mo...
- Building a Performance Model for Deep Learning Recommendation Model Training on GPUs (01/19/2022): We devise a performance model for GPU training of Deep Learning Recommen...
- Efficient Use of Limited-Memory Accelerators for Linear Learning on Heterogeneous Systems (08/17/2017): We propose a generic algorithmic building block to accelerate training o...
- Exploring shared memory architectures for end-to-end gigapixel deep learning (04/24/2023): Deep learning has made great strides in medical imaging, enabled by hard...
- High-Performance Training by Exploiting Hot-Embeddings in Recommendation Systems (03/01/2021): Recommendation models are commonly used learning models that suggest rel...
- FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs (09/03/2023): The rapid growth of memory and computation requirements of large languag...
