Communication Scheduling as a First-Class Citizen in Distributed Machine Learning Systems
State-of-the-art machine learning systems rely on graph-based models, and distributed training of these models is the norm in AI-powered production pipelines. The performance of these communication-heavy systems depends on effectively overlapping communication with computation. While this overlap challenge has been addressed in systems with simpler model representations, it remains an open problem for graph-based models. In this work, we develop a communication scheduling system that achieves near-optimal overlap of communication and computation in graph-based models. Our system is implemented over TensorFlow and requires no changes to the model or developer inputs. It improves throughput by up to 82% in inference and 20% in training, while also reducing the straggler effect by up to 2.8x. Part of our implementation is already merged into the TensorFlow codebase; the rest is publicly available.
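To make the scheduling idea concrete, here is a minimal illustrative sketch, not the paper's system: if parameter transfers are issued in the order the computation first consumes them, early layers can begin computing while later parameters are still in flight, overlapping communication with computation. All names (`transfer_order`, the layer and parameter labels) are hypothetical.

```python
# Toy sketch (assumption, not the paper's implementation): order parameter
# transfers by the point at which the computation graph first needs them,
# so communication overlaps with computation instead of blocking it.

def transfer_order(layers, params_needed_by):
    """Return parameter names in the order the computation consumes them.

    layers: layer names in execution order (assumed known from the graph).
    params_needed_by: dict mapping layer name -> list of parameter names.
    """
    order, seen = [], set()
    for layer in layers:
        for p in params_needed_by.get(layer, []):
            if p not in seen:          # transfer each parameter once
                seen.add(p)
                order.append(p)
    return order

if __name__ == "__main__":
    layers = ["conv1", "conv2", "fc"]
    needs = {"conv1": ["w1"], "conv2": ["w2"], "fc": ["w3"]}
    # w1 is transferred first, so conv1 can start while w2 and w3 arrive.
    print(transfer_order(layers, needs))
```

Under this ordering, a naive schedule that happened to deliver `w3` first would stall all computation until the last transfer; the consumption-ordered schedule lets the first layer start as soon as its own parameter lands.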