Whale: A Unified Distributed Training Framework

11/18/2020
by Ang Wang, et al.

Data parallelism (DP) has long been a common practice for speeding up training workloads. However, as data and model sizes grow, DP has become less efficient for many distributed training workloads, and it cannot be applied to models whose parameters do not fit into a single GPU's device memory. To enable and further improve industrial-scale giant model training, we present Whale, a unified distributed training framework. It provides comprehensive parallel strategies, including data parallelism, model parallelism, operator sharding, pipelining, hybrid strategies, and automatic parallelization. To express complex training strategies effectively and efficiently within one framework, Whale IR is designed as the basic unit for exploring and implementing different distributed strategies. Moreover, Whale enables automatic parallelism through a meta-driven cost model. Whale is compatible with TensorFlow and can distribute training tasks by adding a few lines of code, without changing the user's model code. To the best of our knowledge, Whale is the first work to support various hybrid distributed strategies within one framework. In our experiments on the BERT Large model, Whale's pipeline strategy is 2.32 times faster than Horovod data parallelism (HDP) on 64 GPUs. In a large-scale image classification task (100,000 classes), Whale's hybrid strategy, which combines operator sharding and DP, is 14.8 times faster than HDP on 64 GPUs.
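To make the "few lines of code" idea concrete, the sketch below shows what an annotation-style API for combining pipeline and data parallelism around an unchanged TensorFlow model could look like. This is a minimal illustration under assumptions: the module alias `wh` and the scope names `replica` and `stage` are hypothetical, chosen only to convey the scope-annotation pattern, and are not taken from the Whale paper's verified interface.

```python
# Hypothetical sketch of an annotation-style hybrid-parallel API.
# `whale`, `wh.replica()`, and `wh.stage()` are illustrative assumptions,
# not the confirmed Whale interface.
import tensorflow as tf
import whale as wh  # assumed module name

num_classes = 10
inputs = tf.keras.Input(shape=(128,))

with wh.replica():                      # replicate the enclosed block for data parallelism
    with wh.stage():                    # pipeline stage 1
        hidden = tf.keras.layers.Dense(1024, activation="relu")(inputs)
    with wh.stage():                    # pipeline stage 2
        logits = tf.keras.layers.Dense(num_classes)(hidden)

# Loss, optimizer, and the training loop stay exactly as in single-GPU code;
# the framework is expected to handle device placement, pipeline scheduling,
# and gradient synchronization behind the scope annotations.
```

The point of such a design is that parallelization decisions live in a few enclosing scopes rather than inside the model definition, so switching between DP, pipeline, or a hybrid strategy does not require rewriting the model itself.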
