LUTNet: speeding up deep neural network inferencing via look-up tables

05/25/2019
by Chai Wah Wu, et al.

We consider the use of look-up tables (LUTs) to speed up and simplify the hardware implementation of a deep neural network for inference after the weights have been trained. The LUTs replace the matrix multiply-and-add operations with a small number of table lookups and additions, yielding a multiplier-less implementation. We compare the tradeoffs of this approach in terms of accuracy versus LUT size and number of operations.
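To illustrate the idea of a multiplier-less matrix-vector product, here is a minimal sketch in NumPy. It assumes 8-bit activations and a small weight codebook (these quantization choices are illustrative, not the paper's exact scheme): each codebook weight gets a precomputed 256-entry product table, so inference needs only lookups and additions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (hypothetical, for illustration): weights drawn from a
# small codebook, activations are 8-bit unsigned integers.
codebook = np.array([-2, -1, 0, 1, 2])           # small set of weight values
W_idx = rng.integers(0, len(codebook), (4, 8))   # weights stored as codebook indices
x = rng.integers(0, 256, 8)                      # 8-bit input activations

# Precompute LUTs once: lut[k, v] = codebook[k] * v for every 8-bit value v.
lut = codebook[:, None] * np.arange(256)[None, :]

# Multiplier-less matrix-vector product: pure table lookups and adds.
y_lut = np.array([sum(lut[W_idx[i, j], x[j]] for j in range(8))
                  for i in range(4)])

# Reference computation with ordinary multiplies for comparison.
y_ref = codebook[W_idx] @ x
assert np.array_equal(y_lut, y_ref)
```

The tradeoff the abstract mentions is visible here: a larger codebook or wider activations grows the tables, while a smaller codebook shrinks them at the cost of quantization error and hence accuracy.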


Related research

10/12/2017 · Reduction of Look Up Tables for Computation of Reciprocal of Square Roots
Among many existing algorithms, convergence methods are the most popular...

10/22/2019 · Neural Network Training with Approximate Logarithmic Computations
The high computational complexity associated with training deep neural n...

12/03/2021 · NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference
Non-linear operations such as GELU, Layer normalization, and Softmax are...

12/06/2018 · Generalizations of Laver tables
We shall generalize the notion of a Laver table to algebras which may ha...

03/19/2021 · Relational Operations in FOLE
This paper discusses relational operations in the first-order logical en...

08/29/2022 · A Probabilistic Model Revealing Shortcomings in Lua's Hybrid Tables
Lua (Ierusalimschy et al., 1996) is a well-known scripting language, pop...

10/03/2019 · Algebraic statistics, tables, and networks: The Fienberg advantage
Stephen Fienberg's affinity for contingency table problems and reinterpr...
