BrainSlug: Transparent Acceleration of Deep Learning Through Depth-First Parallelism

04/23/2018
by Nicolas Weber, et al.

Neural network frameworks such as PyTorch and TensorFlow are the workhorses of numerous machine learning applications, ranging from object recognition to machine translation. While these frameworks are versatile and straightforward to use, training and inference in deep neural networks are resource (energy, compute, and memory) intensive. In contrast to recent works focusing on algorithmic enhancements, we introduce BrainSlug, a framework that transparently accelerates neural network workloads by changing the default layer-by-layer processing to a depth-first approach, reducing the amount of data required by the computations and thus improving the utilization of the available hardware caches. BrainSlug achieves performance improvements of up to 41.1%. Our approach and results complement the state of the art, as they do not require hardware changes and only need tiny adjustments to the software.
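The contrast between the two processing orders can be sketched in a few lines of Python. This is a minimal illustration, not BrainSlug's actual API: the layers are hypothetical element-wise stand-ins, and the block size is an arbitrary placeholder for a cache-sized chunk. The key point is that both schedules compute the same result, but the depth-first version pushes each small block through all layers while it is still hot in cache.

```python
# Hypothetical element-wise layers (illustrative stand-ins, not BrainSlug's API).
layers = [
    lambda v: max(v, 0.0),  # ReLU-like activation
    lambda v: v * 2.0,      # scaling
    lambda v: v + 1.0,      # bias add
]

def layer_by_layer(data):
    """Default schedule: each layer streams over the whole batch,
    so every intermediate result is materialized in full."""
    for layer in layers:
        data = [layer(v) for v in data]
    return data

def depth_first(data, block_size=4):
    """Depth-first schedule: split the batch into small (cache-sized)
    blocks and run *all* layers on one block before moving on."""
    out = []
    for start in range(0, len(data), block_size):
        block = data[start:start + block_size]
        for layer in layers:
            block = [layer(v) for v in block]
        out.extend(block)
    return out

inputs = [-2.0, -1.0, 0.5, 1.0, 3.0, -0.5, 2.0]
assert layer_by_layer(inputs) == depth_first(inputs)
```

In a real framework the per-element lambdas would be vectorized kernels, but the scheduling idea is the same: reordering the loop nest from layers-outer to blocks-outer shrinks the working set without changing the computed values.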
