A Comparative Measurement Study of Deep Learning as a Service Framework

10/29/2018
by Yanzhao Wu, et al.

Big data powered Deep Learning (DL) and its applications have blossomed in recent years, fueled by three technological trends: large amounts of openly accessible digitized data, a growing number of DL software frameworks in open source and commercial markets, and a selection of affordable parallel computing hardware devices. However, no single DL framework, to date, dominates in terms of performance and accuracy even for baseline classification tasks on standard datasets, making the selection of a DL framework an overwhelming task. This paper takes a holistic approach to conduct empirical comparison and analysis of four representative DL frameworks, with three unique contributions. First, given a selection of CPU-GPU configurations, we show that for a specific DL framework, different configurations of its hyper-parameters may have significant impact on both the performance and the accuracy of DL applications. Second, the optimal configuration of hyper-parameters for one DL framework (e.g., TensorFlow) often does not work well for another DL framework (e.g., Caffe or Torch) under the same CPU-GPU runtime environment. Third, we also conduct a comparative measurement study on the resource consumption patterns of the four DL frameworks and their performance and accuracy implications, including CPU and memory usage, and their correlations to varying settings of hyper-parameters under different combinations of hardware and parallel computing libraries. We argue that this measurement study provides in-depth empirical comparison and analysis of four representative DL frameworks, and offers practical guidance for service providers in deploying and delivering DL as a Service (DLaaS), and for application developers and DLaaS consumers in selecting the right DL frameworks for the right DL workloads.
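The core methodology the abstract describes, timing the same workload under varying hyper-parameter settings and comparing throughput, can be illustrated with a minimal, framework-agnostic sketch. The function names (`measure_throughput`, `toy_step`) and the toy workload are illustrative assumptions, not the paper's actual benchmarking harness; in the study, the timed step would be a real training iteration in TensorFlow, Caffe, or Torch.

```python
import time

def measure_throughput(step_fn, batch_size, iters=5):
    """Time `iters` synthetic training steps and return examples/sec."""
    start = time.perf_counter()
    for _ in range(iters):
        step_fn(batch_size)
    elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed

def toy_step(batch_size, dim=64):
    """Stand-in for one framework training step: a dense mat-vec per example."""
    w = [[0.01] * dim for _ in range(dim)]
    for _ in range(batch_size):
        x = [1.0] * dim
        _ = [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

if __name__ == "__main__":
    # Sweep one hyper-parameter (batch size) and record throughput,
    # mirroring how the study correlates settings with performance.
    for bs in (16, 64, 256):
        tput = measure_throughput(toy_step, bs)
        print(f"batch_size={bs:4d}  throughput={tput:10.1f} examples/s")
```

Repeating such a sweep per framework, per CPU-GPU configuration, and per hyper-parameter (learning rate, number of workers, etc.) yields the kind of comparison grid the paper analyzes.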

