Distributed Learning of Deep Neural Networks using Independent Subnet Training

by Binhang Yuan, et al.

Stochastic gradient descent (SGD) is the method of choice for distributed machine learning, by virtue of its light per-iteration complexity on compute nodes, which in theory yields almost linear speedups. Nevertheless, such speedups are rarely observed in practice, due to high communication overheads during synchronization steps. We alleviate this problem by introducing independent subnet training: a simple, jointly model-parallel and data-parallel approach to distributed training for fully connected, feed-forward neural networks. During subnet training, neurons are stochastically partitioned without replacement, and each partition is sent to a single worker. This reduces the overall synchronization overhead, as each worker receives only the weights associated with the subnetwork it has been assigned. Subnet training also reduces synchronization frequency: since workers train disjoint portions of the network, training can proceed for long periods before synchronization, similar to local SGD approaches. We empirically evaluate our approach on real-world speech recognition and product recommendation applications, where we observe that subnet training i) accelerates training, compared to state-of-the-art distributed models, and ii) often boosts test accuracy, as it implicitly combines dropout and batch normalization regularization during training.
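The partitioning scheme the abstract describes can be sketched in a few lines of NumPy. This is a minimal, single-process simulation under assumptions not taken from the paper: a one-hidden-layer ReLU network with squared loss, plain SGD as the local optimizer, and illustrative names throughout (`local_sgd`, `n_workers`, learning rate, step counts are all made up). It is meant only to show the mechanics: hidden neurons are stochastically partitioned without replacement each round, each "worker" trains only the weight columns/rows of its subnet on its data shard, and, because the partitions are disjoint, synchronization is a plain write-back with no averaging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected net: input -> hidden (ReLU) -> output.
d_in, d_hidden, d_out = 8, 12, 1
W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
W2 = rng.normal(0.0, 0.1, (d_hidden, d_out))

def local_sgd(W1_s, W2_s, X, y, lr=0.01, steps=20):
    """Plain SGD on one subnet (hypothetical local optimizer)."""
    for _ in range(steps):
        h = np.maximum(X @ W1_s, 0.0)        # forward, hidden ReLU
        err = h @ W2_s - y                   # dL/dpred for squared loss
        g2 = h.T @ err / len(X)              # grad wrt W2_s
        dh = (err @ W2_s.T) * (h > 0)        # backprop through ReLU
        g1 = X.T @ dh / len(X)               # grad wrt W1_s
        W1_s -= lr * g1
        W2_s -= lr * g2
    return W1_s, W2_s

n_workers = 3
X = rng.normal(size=(60, d_in))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

for sync_round in range(5):
    # Stochastically partition hidden neurons without replacement.
    parts = np.array_split(rng.permutation(d_hidden), n_workers)
    # Data-parallel shards, one per worker.
    shards = np.array_split(np.arange(len(X)), n_workers)
    for idx, shard in zip(parts, shards):
        # Each worker receives only its subnet's weights...
        W1_s, W2_s = W1[:, idx].copy(), W2[idx, :].copy()
        W1_s, W2_s = local_sgd(W1_s, W2_s, X[shard], y[shard])
        # ...and since partitions are disjoint, synchronization is a
        # simple write-back: no averaging or conflict resolution.
        W1[:, idx] = W1_s
        W2[idx, :] = W2_s
```

Note the contrast with data-parallel SGD: here only a slice of each weight matrix crosses the (simulated) network per worker, and nothing needs to be averaged at synchronization.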




