QUIDAM: A Framework for Quantization-Aware DNN Accelerator and Model Co-Exploration

by Ahmet Inci et al.

As the machine learning and systems communities strive for higher energy efficiency through custom deep neural network (DNN) accelerators, varied precision or quantization levels, and model compression techniques, there is a need for design space exploration frameworks that incorporate quantization-aware processing elements into the accelerator design space while providing fast and accurate power, performance, and area models. In this work, we present QUIDAM, a highly parameterized quantization-aware DNN accelerator and model co-exploration framework. Our framework can facilitate future research on design space exploration of DNN accelerators across design choices such as bit precision, processing element type, scratchpad sizes of processing elements, global buffer size, total number of processing elements, and DNN configurations. Our results show that different bit precisions and processing element types lead to significant differences in performance per area and energy. Specifically, our framework identifies a wide range of design points where performance per area and energy vary by more than 5x and 35x, respectively. With the proposed framework, we show that lightweight processing elements achieve on-par accuracy and up to 5.7x improvement in performance per area and energy compared to the best INT16-based implementation. Finally, because the pre-characterized power, performance, and area models remove the need for expensive synthesis and characterization of each design, QUIDAM speeds up the design exploration process by 3-4 orders of magnitude.
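To illustrate the kind of lookup-based exploration the abstract describes, the sketch below enumerates accelerator configurations over a few of the listed design knobs (bit precision, processing element type, PE count) and ranks them by performance per area using a pre-characterized per-PE model. All names (`PE_LIBRARY`, `explore`) and every PPA number here are invented for illustration; QUIDAM's actual models are pre-characterized from synthesis and cover many more parameters.

```python
from itertools import product

# Hypothetical pre-characterized per-PE entries: area in mm^2, power in mW,
# throughput in GOPS. In a real flow these come from one-time synthesis and
# characterization; the values below are made up for this sketch.
PE_LIBRARY = {
    ("bit_parallel", 16): {"area": 0.020, "power": 4.0, "gops": 2.0},
    ("bit_parallel", 8):  {"area": 0.009, "power": 1.8, "gops": 2.0},
    ("lightweight", 4):   {"area": 0.003, "power": 0.6, "gops": 1.8},
}

def explore(num_pes_options=(64, 128, 256)):
    """Enumerate (PE type, precision, PE count) configs and rank them
    by performance per area, without any per-design synthesis."""
    points = []
    for (pe_type, bits), num_pes in product(PE_LIBRARY, num_pes_options):
        ppa = PE_LIBRARY[(pe_type, bits)]
        total_area = ppa["area"] * num_pes   # PE array area only
        total_perf = ppa["gops"] * num_pes   # aggregate throughput
        points.append({
            "pe_type": pe_type,
            "bits": bits,
            "num_pes": num_pes,
            "perf_per_area": total_perf / total_area,  # GOPS per mm^2
        })
    return sorted(points, key=lambda p: p["perf_per_area"], reverse=True)

best = explore()[0]
```

Because each design point is a dictionary lookup plus arithmetic rather than a synthesis run, sweeping thousands of configurations is effectively instantaneous, which is the source of the 3-4 orders-of-magnitude speedup claimed above.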




