A case for disaggregation of ML data processing

by Andrew Audibert et al.

Machine Learning (ML) computation requires feeding input data to the models. Traditionally, input data processing happens on the same host as the ML computation. However, if there are insufficient resources to process data quickly enough, input data processing becomes a bottleneck: it slows down the ML computation and wastes the valuable and scarce ML hardware (e.g., GPUs and TPUs) it runs on. In this paper, we present tf.data service, a disaggregated input data processing service built on top of tf.data. Our work goes beyond describing the design and implementation of a new system that disaggregates preprocessing from ML computation; it also presents: (1) empirical evidence, based on production workloads, of the need for disaggregation, as well as a quantitative evaluation of the impact disaggregation has on the performance and cost of production workloads, (2) benefits of disaggregation beyond horizontal scaling, and (3) an analysis of tf.data service's adoption at Google, the lessons learned while building and deploying the system, and potential future lines of research opened up by our work. We demonstrate that horizontally scaling data processing with tf.data service helps remove input bottlenecks, achieving speedups of up to 110x and job cost reductions of up to 89x. We further show that tf.data service supports computation reuse through data sharing across ML jobs with identical data processing pipelines (e.g., hyperparameter tuning jobs), incurring no performance penalty and reducing overall resource cost. Finally, we show that advanced tf.data service features can benefit the performance of non-input-bound jobs; in particular, coordinated data reads through tf.data service can yield up to 2x speedups and job cost savings for NLP jobs.
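The core idea of disaggregation can be illustrated with a minimal stdlib sketch (a hypothetical illustration, not the paper's implementation): preprocessing runs in a separate worker pool, standing in for a remote fleet of tf.data service workers, and the "trainer" consumes ready elements from a bounded queue, so preprocessing capacity can be scaled independently of the ML hardware.

```python
from concurrent.futures import ThreadPoolExecutor
import queue

def preprocess(record):
    # Stand-in for decode/augment work that, in the real system, would
    # run on remote tf.data service workers rather than the trainer host.
    return record * 2

def run_pipeline(records, num_workers=4):
    # "Disaggregated" preprocessing: a worker pool transforms records and
    # feeds a bounded queue that the ML computation drains. Scaling the
    # pool (num_workers) is the analogue of horizontally scaling the
    # tf.data service worker fleet.
    q = queue.Queue()
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        # pool.map preserves input order, mirroring deterministic
        # element ordering in an input pipeline.
        for result in pool.map(preprocess, records):
            q.put(result)
    # The "trainer" consumes preprocessed elements from the queue.
    out = []
    while not q.empty():
        out.append(q.get())
    return out

print(run_pipeline(range(5)))  # -> [0, 2, 4, 6, 8]
```

In the actual system, the transformation graph is shipped to the dispatcher and executed by remote workers; this sketch only captures the separation between where data is prepared and where it is consumed.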

