Cocktail: Leveraging Ensemble Learning for Optimized Model Serving in Public Cloud

by Jashwant Raj Gunasekaran, et al.

With a growing demand for adopting ML models for a variety of application services, it is vital that the frameworks serving these models are capable of delivering highly accurate predictions with minimal latency, along with reduced deployment costs, in a public cloud environment. Despite high latency, prior works in this domain are crucially limited by the accuracy offered by individual models. Intuitively, model ensembling can address the accuracy gap by intelligently combining different models in parallel. However, selecting the appropriate models dynamically at runtime to meet the desired accuracy with low latency at minimal deployment cost is a nontrivial problem. Towards this, we propose Cocktail, a cost-effective ensembling-based model serving framework. Cocktail comprises two key components: (i) a dynamic model selection framework, which reduces the number of models in the ensemble while satisfying the accuracy and latency requirements; (ii) an adaptive resource management (RM) framework that employs a distributed proactive autoscaling policy combined with importance sampling to efficiently allocate resources for the models. The RM framework leverages transient virtual machine (VM) instances to reduce the deployment cost in a public cloud. A prototype implementation of Cocktail on the AWS EC2 platform and exhaustive evaluations using a variety of workloads demonstrate that Cocktail can reduce deployment cost by 1.45x, while providing a 2x reduction in latency and satisfying the target accuracy for up to 96% of the requests, when compared to state-of-the-art model-serving frameworks.
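The abstract does not describe Cocktail's actual selection policy, but the core idea of dynamic model selection — picking the cheapest set of models whose combined (majority-vote) accuracy meets a target within a latency bound — can be sketched as follows. The model names, accuracy/latency/cost figures, independence assumption, and brute-force search below are all illustrative assumptions, not Cocktail's algorithm:

```python
from itertools import combinations

def ensemble_accuracy(accs):
    # Probability that a strict majority of independent classifiers is
    # correct, computed by dynamic programming over the distribution of
    # the number of correct votes. (Independence is an assumption.)
    dist = [1.0]  # dist[k] = P(exactly k models correct so far)
    for a in accs:
        new = [0.0] * (len(dist) + 1)
        for k, p in enumerate(dist):
            new[k] += p * (1 - a)      # this model is wrong
            new[k + 1] += p * a        # this model is correct
        dist = new
    n = len(accs)
    return sum(p for k, p in enumerate(dist) if k > n / 2)

def select_ensemble(models, target_acc, latency_slo):
    # models: list of (name, accuracy, latency_ms, cost) tuples.
    # Ensembles run in parallel, so ensemble latency = max member latency.
    # Return the cheapest smallest ensemble meeting both constraints.
    best = None
    for size in range(1, len(models) + 1):
        for combo in combinations(models, size):
            if max(m[2] for m in combo) > latency_slo:
                continue
            if ensemble_accuracy([m[1] for m in combo]) < target_acc:
                continue
            cost = sum(m[3] for m in combo)
            if best is None or cost < best[1]:
                best = (combo, cost)
        if best:  # prefer the smallest ensemble size that works
            return [m[0] for m in best[0]]
    return None  # no feasible ensemble
```

For example, with three hypothetical models of individual accuracies 0.70, 0.76, and 0.71, no single model reaches an 0.80 target, but the majority vote of all three does (roughly 0.813 under the independence assumption), which mirrors how ensembling can close the accuracy gap at the cost of running more models.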




