Post-Training Quantization for Energy Efficient Realization of Deep Neural Networks

10/14/2022
by Cecilia Latotzke, et al.

The biggest challenge for deploying Deep Neural Networks (DNNs) close to the generated data on edge devices is their size, i.e., memory footprint and computational complexity. Both are significantly reduced by quantization. With the resulting lower word-length, the energy efficiency of DNNs increases proportionally. However, lower word-length typically causes accuracy degradation. To counteract this effect, the quantized DNN is retrained. Unfortunately, training costs up to 5000x more energy than the inference of the quantized DNN. To address this issue, we propose a post-training quantization flow without the need for retraining. For this, we investigated different quantization options. Furthermore, our analysis systematically assesses the impact of reduced word-lengths of weights and activations, revealing a clear trend for the choice of word-length. Neither aspect has been systematically investigated so far. Our results are independent of the depth of the DNNs and apply to uniform quantization, allowing fast quantization of a given pre-trained DNN. We exceed the state-of-the-art for 6 bit by 2.2% Top-1 accuracy on ImageNet. Without retraining, our quantization to 8 bit surpasses floating-point accuracy.
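
The proposed flow applies uniform quantization to the weights and activations of a pre-trained DNN. As a rough illustration of the underlying operation (a minimal sketch, not the authors' exact flow), the following Python snippet performs symmetric per-tensor uniform quantization; the max-abs scale selection and the symmetric clipping range are assumptions made for this example.

```python
import numpy as np

def uniform_quantize(x: np.ndarray, wordlength: int) -> np.ndarray:
    """Symmetric per-tensor uniform quantization to the given word-length.

    The tensor is mapped to integer levels in [-qmax, qmax] and then
    de-quantized, emulating fixed-point inference in floating point.
    """
    qmax = 2 ** (wordlength - 1) - 1            # e.g. 127 for 8 bit
    scale = np.max(np.abs(x)) / qmax            # max-abs scale (assumption)
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale                            # de-quantized values

# Example: quantize a pre-trained weight tensor to 8 and 6 bit
w = np.random.randn(64, 3, 3, 3).astype(np.float32)
for bits in (8, 6):
    w_q = uniform_quantize(w, bits)
    print(f"{bits} bit, max abs error: {np.max(np.abs(w - w_q)):.4f}")
```

Going from 8 to 6 bit shrinks the number of representable levels from 255 to 63 in this symmetric scheme, which is the source of the accuracy degradation the abstract refers to.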

Related research:

06/15/2021 - Development of Quantized DNN Library for Exact Hardware Emulation
Quantization is used to speed up execution time and save power when runn...

07/28/2021 - MARViN – Multiple Arithmetic Resolutions Vacillating in Neural Networks
Quantization is a technique for reducing deep neural networks (DNNs) tra...

04/08/2022 - Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment
Deep Neural Networks (DNNs) have gained considerable attention in the pa...

03/28/2022 - REx: Data-Free Residual Quantization Error Expansion
Deep neural networks (DNNs) are nowadays ubiquitous in the computer visi...

12/24/2020 - FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training
Recent breakthroughs in deep neural networks (DNNs) have fueled a tremen...

08/13/2018 - A Survey on Methods and Theories of Quantized Neural Networks
Deep neural networks are the state-of-the-art methods for many real-worl...

11/05/2018 - ReLeQ: A Reinforcement Learning Approach for Deep Quantization of Neural Networks
Despite numerous state-of-the-art applications of Deep Neural Networks (...
