Compressing Low Precision Deep Neural Networks Using Sparsity-Induced Regularization in Ternary Networks

09/19/2017
by Julian Faraone, et al.

A low precision deep neural network training technique for producing sparse, ternary neural networks is presented. The technique incorporates hardware implementation costs during training to achieve significant model compression for inference. Training involves three stages: network training using L2 regularization and a quantization threshold regularizer, quantization pruning, and finally retraining. Resulting networks achieve improved accuracy, reduced memory footprint and reduced computational complexity compared with conventional methods on the MNIST and CIFAR10 datasets. Our networks are up to 98% sparse and 5 and 11 times smaller than equivalent binary and ternary models respectively, translating to significant resource and speed benefits for hardware implementations.
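To make the pruning idea concrete, below is a minimal sketch of threshold-based ternary weight quantization, assuming a common ternary-weight-network heuristic (threshold set to a fraction of the mean absolute weight); the function name `ternarize` and the hyperparameter `delta_ratio` are illustrative choices, not the authors' exact scheme or regularizer.

```python
# Hedged sketch: threshold-based ternary quantization. Weights whose
# magnitude falls below a threshold "delta" are pruned to zero; the
# remaining weights are mapped to +/- a per-layer scale, which is where
# the sparsity of a ternary network comes from.
import numpy as np

def ternarize(weights: np.ndarray, delta_ratio: float = 0.7):
    """Quantize a float weight tensor to {-alpha, 0, +alpha}.

    delta_ratio is a hypothetical hyperparameter: the threshold is set to
    delta_ratio * mean(|w|), a heuristic often used for ternary weights.
    """
    delta = delta_ratio * np.mean(np.abs(weights))
    mask = np.abs(weights) > delta                               # unpruned positions
    alpha = np.abs(weights[mask]).mean() if mask.any() else 0.0  # per-layer scale
    ternary = alpha * np.sign(weights) * mask                    # values in {-alpha, 0, +alpha}
    sparsity = 1.0 - mask.mean()                                 # fraction pruned to zero
    return ternary, sparsity

# Example: quantize a random layer and report its sparsity.
w = np.random.randn(256, 128).astype(np.float32)
w_t, s = ternarize(w)
print(f"sparsity: {s:.2%}")
```

In the paper's pipeline, a regularizer pushes more weights below the quantization threshold during training, so this pruning step yields far higher sparsity than quantizing an unregularized network.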
