Shuffle-Exchange Brings Faster: Reduce the Idle Time During Communication for Decentralized Neural Network Training

07/01/2020
by Xiang Yang, et al.

As a crucial scheme for accelerating deep neural network (DNN) training, distributed stochastic gradient descent (DSGD) is widely adopted in many real-world applications. In most distributed deep learning (DL) frameworks, DSGD is implemented with a Ring-AllReduce architecture (Ring-SGD) and uses a computation-communication overlap strategy to hide the overhead of the massive communication required by DSGD. However, we observe that although only O(1) gradients need to be communicated per worker in Ring-SGD, the O(n) handshakes required by Ring-SGD limit its usefulness when training with many workers or over high-latency networks. In this paper, we propose Shuffle-Exchange SGD (SESGD) to resolve this dilemma of Ring-SGD. In a cluster of 16 workers with 0.1 ms Ethernet latency, SESGD accelerates DNN training by up to 1.7× without losing model accuracy. Moreover, in high-latency networks (5 ms), training can be accelerated by up to 5×.
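For concreteness, the sketch below illustrates the standard alpha-beta cost model behind the abstract's argument: ring all-reduce moves only O(1) data per worker but pays a latency (handshake) term for each of its 2(n-1) steps, so high-latency links dominate at scale, whereas a schedule with only O(log n) steps pays far fewer latency terms. This is not the paper's implementation; the worker count, gradient size, bandwidth, and the log-step pattern used for comparison are illustrative assumptions.

```python
import math

# Minimal alpha-beta cost model (illustrative, not the paper's code).
# alpha = per-message latency (s), beta = per-byte transfer time (s/byte),
# m = gradient size in bytes, n = number of workers.

def ring_allreduce_time(n, m, alpha, beta):
    """Classic ring all-reduce: 2*(n-1) steps, each sending a chunk of size m/n."""
    return 2 * (n - 1) * (alpha + beta * m / n)

def log_step_allreduce_time(n, m, alpha, beta):
    """A generic O(log n)-step exchange pattern, used only to show why fewer
    handshakes help; the paper's Shuffle-Exchange schedule may differ in its
    bandwidth term. Conservatively assumes the full gradient moves each step."""
    steps = math.ceil(math.log2(n))
    return steps * (alpha + beta * m)

if __name__ == "__main__":
    n, m = 16, 100e6              # 16 workers, ~100 MB of gradients (assumed)
    beta = 1 / 10e9               # ~10 GB/s link (assumed)
    for alpha in (0.1e-3, 5e-3):  # 0.1 ms vs 5 ms latency, as in the abstract
        print(f"latency={alpha * 1e3:.1f} ms  "
              f"ring={ring_allreduce_time(n, m, alpha, beta):.3f} s  "
              f"log-step={log_step_allreduce_time(n, m, alpha, beta):.3f} s")
```

Under this toy model, the ring's latency term grows linearly with the worker count, which is exactly the regime where the abstract reports the largest speedups (high-latency, many-worker clusters); at very low latency the ring's smaller bandwidth term can still win.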
