Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters

06/11/2017
by Hao Zhang, et al.

Deep learning models can take weeks to train on a single GPU-equipped machine, necessitating scaling out DL training to a GPU cluster. However, current distributed DL implementations can scale poorly due to substantial parameter synchronization over the network, because the high throughput of GPUs allows more data batches to be processed per unit time than CPUs, leading to more frequent network synchronization. We present Poseidon, an efficient communication architecture for distributed DL on GPUs. Poseidon exploits the layered model structures in DL programs to overlap communication and computation, reducing bursty network communication. Moreover, Poseidon uses a hybrid communication scheme that optimizes the number of bytes required to synchronize each layer, according to layer properties and the number of machines. We show that Poseidon is applicable to different DL frameworks by plugging Poseidon into Caffe and TensorFlow. We show that Poseidon enables Caffe and TensorFlow to achieve 15.5x speed-up on 16 single-GPU machines, even with limited bandwidth (10GbE) and the challenging VGG19-22K network for image classification. Moreover, Poseidon-enabled TensorFlow achieves 31.5x speed-up with 32 single-GPU machines on Inception-V3, a 50% improvement over the open-source TensorFlow (20x speed-up).
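The hybrid scheme can be read as a per-layer cost comparison: small layers are synchronized as full matrices through a parameter server, while large fully connected layers are shipped as low-rank "sufficient factors" whenever that moves fewer values for the given batch size and machine count. The Python sketch below illustrates that decision under an assumed cost model; the function names, example layer sizes, and exact cost formulas are illustrative, not taken from the paper's code.

# Hedged sketch, not the authors' implementation: a per-layer choice between
# syncing the full M x N gradient matrix through a parameter server and
# broadcasting per-sample rank-1 "sufficient factors" to all peers.
# The cost model and all names below are assumptions made for illustration.

def cost_full_matrix(rows, cols):
    # Push the gradient matrix up and pull the updated matrix back:
    # roughly 2 * M * N values per machine, independent of cluster size.
    return 2 * rows * cols

def cost_sufficient_factors(rows, cols, batch_size, num_machines):
    # Broadcast two vectors (length M and N) per training sample to every
    # other machine: roughly 2 * K * (M + N) * (P - 1) values per machine.
    return 2 * batch_size * (rows + cols) * (num_machines - 1)

def choose_sync_method(rows, cols, batch_size, num_machines):
    # Pick whichever scheme moves fewer values for this particular layer.
    full = cost_full_matrix(rows, cols)
    factors = cost_sufficient_factors(rows, cols, batch_size, num_machines)
    if factors < full:
        return "sufficient_factors", factors
    return "full_matrix", full

if __name__ == "__main__":
    # A large fully connected layer (e.g. a 4096 x 4096 fc layer in VGG19)
    # favors sufficient factors; a small convolutional layer does not.
    print(choose_sync_method(4096, 4096, batch_size=32, num_machines=16))
    print(choose_sync_method(3 * 3 * 64, 64, batch_size=32, num_machines=16))

In this toy cost model the 4096 x 4096 layer is cheaper to send as factors (about 7.9M values versus 33.6M for the full matrix on 16 machines), while the small convolutional layer is cheaper as a full matrix, which is the kind of per-layer, cluster-size-dependent trade-off the abstract describes.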


research
12/19/2015

Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines

Deep learning (DL) has achieved notable successes in many machine learni...
research
02/15/2018

Horovod: fast and easy distributed deep learning in TensorFlow

Training modern deep learning models requires large amounts of computati...
research
10/06/2018

Characterizing Deep-Learning I/O Workloads in TensorFlow

The performance of Deep-Learning (DL) computing frameworks rely on the p...
research
01/30/2020

Non-Determinism in TensorFlow ResNets

We show that the stochasticity in training ResNets for image classificat...
research
11/30/2019

Training Distributed Deep Recurrent Neural Networks with Mixed Precision on GPU Clusters

In this paper, we evaluate training of deep recurrent neural networks wi...
research
04/03/2018

Designing a Micro-Benchmark Suite to Evaluate gRPC for TensorFlow: Early Experiences

Remote procedure call (RPC) is the backbone of many modern distributed s...
research
03/11/2018

Scalable Breadth-First Search on a GPU Cluster

On a GPU cluster, the ratio of high computing power to communication ban...
