Accelerating Deep Convolutional Networks using low-precision and sparsity

10/02/2016
by Ganesh Venkatesh et al.

We explore techniques to significantly improve the compute efficiency and performance of Deep Convolutional Networks without impacting their accuracy. To improve the compute efficiency, we focus on achieving high accuracy with extremely low-precision (2-bit) weight networks, and to accelerate the execution time, we aggressively skip operations on zero values. We achieve the highest reported accuracy of 76.6% Top-1/93% Top-5 on the ImageNet object classification challenge with a low-precision network [GitHub release of the source code coming soon], while reducing the compute requirement by 3x compared to a full-precision network that achieves similar accuracy. Furthermore, to fully exploit the benefits of our low-precision networks, we build a deep learning accelerator core, dLAC, that can achieve up to 1 TFLOP/mm^2 equivalent for single-precision floating-point operations (~2 TFLOP/mm^2 for half-precision).
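To make the 2-bit weight idea concrete, here is a minimal NumPy sketch of one common ternary quantization scheme, in which weights below a magnitude threshold are zeroed and the survivors collapse to a single per-tensor scale. The `threshold_ratio` hyperparameter and the scale rule are illustrative assumptions, not necessarily the exact quantization method used in the paper.

```python
import numpy as np

def ternarize(weights, threshold_ratio=0.7):
    """Quantize full-precision weights to 2-bit ternary values {-s, 0, +s}.

    threshold_ratio is a hypothetical hyperparameter; the paper's exact
    quantization rule may differ from this common scheme.
    """
    # Per-tensor threshold proportional to the mean absolute weight.
    delta = threshold_ratio * np.mean(np.abs(weights))
    mask = np.abs(weights) > delta           # weights that remain non-zero
    # Scale factor: mean magnitude of the surviving weights.
    scale = np.abs(weights[mask]).mean() if mask.any() else 0.0
    return np.sign(weights) * mask * scale   # values in {-scale, 0, +scale}

w = np.random.randn(64, 64).astype(np.float32)
w_q = ternarize(w)
print("unique levels:", np.unique(w_q))     # three levels -> 2-bit encodable
```

Note that thresholding naturally drives many weights to exactly zero, which is what makes the sparsity-based speedup below possible.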
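The execution-time win then comes from never issuing work for zero-valued operands. The loop below is a software analogue of that zero-skipping, counting only the multiply-accumulates actually performed; in dLAC this skipping would happen in hardware, which this sketch does not attempt to model.

```python
def sparse_dot(weights_q, activations):
    """Dot product that skips zero weights, illustrating how ternary
    sparsity reduces the multiply-accumulate count."""
    total, ops = 0.0, 0
    for w, a in zip(weights_q, activations):
        if w == 0.0:      # zero weight: no multiply-accumulate issued
            continue
        total += w * a
        ops += 1
    return total, ops

x = np.random.randn(64).astype(np.float32)
y, ops = sparse_dot(w_q[0], x)
print(f"performed {ops}/{len(x)} multiply-accumulates")
```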
