Loom: Exploiting Weight and Activation Precisions to Accelerate Convolutional Neural Networks

06/23/2017
by Sayeh Sharify, et al.

Loom (LM), a hardware inference accelerator for Convolutional Neural Networks (CNNs), is presented. In LM, every bit of data precision that can be saved translates into a proportional performance gain. Specifically, for convolutional layers LM's execution time scales inversely with the precisions of both weights and activations, and for fully-connected layers it scales inversely with the precision of the weights. LM targets area-constrained System-on-a-Chip designs, such as those found in mobile devices, that cannot afford the multi-megabyte buffers that would be needed to store each layer on-chip during processing. Experiments on image classification CNNs show that, on average across all networks studied and assuming weights are supplied via a High Bandwidth Memory v2 (HBM2) interface, a configuration of LM outperforms a state-of-the-art bit-parallel accelerator [1] by 2.34x without any loss in accuracy while being 2.23x more energy efficient. Moreover, LM can trade off accuracy for additional improvements in execution performance and energy efficiency.
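To make the scaling concrete, here is a minimal Python sketch of bit-serial multiply-accumulation, the general mechanism behind this kind of precision-proportional speedup. The function names and the simplified unsigned-integer model are illustrative assumptions, not the paper's actual hardware design.

```python
# Minimal software model of bit-serial multiply-accumulation, the general
# mechanism behind precision-proportional speedups like LM's. Names and the
# unsigned-integer simplification are illustrative assumptions, not the
# paper's actual hardware design.

def bits(value, precision):
    """Decompose a non-negative integer into `precision` bits, LSB first."""
    return [(value >> i) & 1 for i in range(precision)]

def bit_serial_dot(weights, acts, p_w, p_a):
    """Dot product computed one (weight bit, activation bit) pair at a time.

    Cycle count grows as len(weights) * p_w * p_a, so each bit of precision
    saved in either operand yields a proportional reduction in work -- the
    scaling the abstract describes for convolutional layers.
    """
    acc, cycles = 0, 0
    for w, a in zip(weights, acts):
        for i, wb in enumerate(bits(w, p_w)):
            for j, ab in enumerate(bits(a, p_a)):
                acc += (wb & ab) << (i + j)  # 1-bit AND, shifted into place
                cycles += 1
    return acc, cycles

weights, acts = [3, 5, 2], [7, 1, 6]
ref = sum(w * a for w, a in zip(weights, acts))

out, cyc = bit_serial_dot(weights, acts, p_w=3, p_a=3)  # 3-bit operands
assert out == ref                                        # same result ...
print(cyc)                                               # ... in 3*3*3 = 27 steps

out2, cyc2 = bit_serial_dot(weights, [3, 1, 2], p_w=3, p_a=2)  # 2-bit acts
print(cyc2)                                              # 3*3*2 = 18 steps
```

A real bit-serial engine would process many such bit pairs in parallel across lanes; the point here is only that the total work, and hence execution time, tracks the product of the two precisions.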
