Accelerating GAN training using highly parallel hardware on public cloud

by Renato Cardoso et al.

With the increasing number of Machine and Deep Learning applications in High Energy Physics, easy access to dedicated infrastructure represents a requirement for fast and efficient R&D. This work explores different types of cloud services to train a Generative Adversarial Network (GAN) in a parallel environment, using TensorFlow's data-parallel strategy. More specifically, we parallelise the training process on multiple GPUs and Google Tensor Processing Units (TPUs), and we compare two algorithms: the TensorFlow built-in logic and a custom training loop, optimised to give finer control over the elements assigned to each GPU worker or TPU core. The quality of the generated data is compared to Monte Carlo simulation. Linear speed-up of the training process is obtained, while retaining most of the performance in terms of physics results. Additionally, we benchmark the aforementioned approaches, at scale, over multiple GPU nodes, deploying the training process on different public cloud providers, seeking overall efficiency and cost-effectiveness. The combination of data science, cloud deployment options and associated economics allows workloads to burst out heterogeneously, exploiting the full potential of cloud-based services.
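The core idea behind the data-parallel strategy described above (what TensorFlow's `MirroredStrategy` or `TPUStrategy` automates) is that each worker computes gradients on its own shard of the global batch, and the shard gradients are then averaged via an all-reduce before a single shared weight update. The following is a minimal, framework-free sketch of that principle; the one-parameter model and all names are illustrative assumptions, not code from the paper:

```python
# Toy sketch of synchronous data-parallel training: each "worker"
# computes a gradient on its shard of the global batch, gradients are
# averaged (the all-reduce step), and one shared update is applied.
# The 1-parameter least-squares model is purely illustrative.

def grad(w, xs, ys):
    """Gradient of mean squared error for y ~ w * x on one shard."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def parallel_step(w, shards, lr=0.01):
    """One synchronous step: per-shard gradients, then all-reduce mean."""
    grads = [grad(w, xs, ys) for xs, ys in shards]  # one per worker/core
    g = sum(grads) / len(grads)                     # all-reduce (mean)
    return w - lr * g

# Global batch split evenly across two workers
xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
shards = [(xs[:2], ys[:2]), (xs[2:], ys[2:])]

w = 0.0
for _ in range(200):
    w = parallel_step(w, shards)
# With equal shard sizes, the averaged shard gradient equals the
# full-batch gradient, so training converges to w ≈ 2.
```

With equal shard sizes the averaged gradient is mathematically identical to the full-batch gradient, which is why this kind of synchronous data parallelism can scale the batch across GPU workers or TPU cores without changing the optimisation trajectory; the custom-loop variant the paper compares gives explicit control over exactly this sharding and reduction logic.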


