NN-LUT: Neural Approximation of Non-Linear Operations for Efficient Transformer Inference

12/03/2021
by Joonsang Yu, et al.

Non-linear operations such as GELU, layer normalization, and softmax are essential yet costly building blocks of Transformer models. Several prior works have simplified these operations with look-up tables or integer computations, but such approximations either suffer from inferior accuracy or incur considerable hardware cost and long latency. This paper proposes an accurate and hardware-friendly approximation framework for efficient Transformer inference. The framework employs a simple neural network as a universal approximator, with its structure equivalently transformed into a look-up table (LUT). The proposed framework, called NN-LUT, can accurately replace all the non-linear operations in popular BERT models while significantly reducing area, power consumption, and latency.
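To make the core idea concrete, below is a minimal sketch of how a small neural network can be fit to a non-linearity and then flattened into a LUT. This is an illustration, not the authors' implementation: it assumes a one-hidden-layer ReLU network with fixed breakpoints fit by least squares (NN-LUT's actual network structure, training procedure, and LUT format may differ), and the names `gelu`, `mlp`, `nn_lut`, and the grid choices are hypothetical. The key property it demonstrates is that a ReLU network of this form is piecewise linear, so sampling it at its breakpoints and interpolating reproduces it exactly within the table range.

```python
import numpy as np

def gelu(x):
    # Reference non-linearity to approximate (tanh form of GELU).
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# 1. Fit a tiny network y = sum_j w_j * ReLU(x - c_j) + a*x + b by least
#    squares. The breakpoints c_j are fixed here for simplicity; a learned
#    network (gradient descent on weights and biases) would also work.
H = 16                                      # hidden width = number of LUT segments
c = np.linspace(-6.0, 6.0, H)               # hypothetical breakpoint grid
x = np.linspace(-8.0, 8.0, 4096)            # training samples over the input range
phi = np.column_stack([np.maximum(0.0, x[:, None] - c), x, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(phi, gelu(x), rcond=None)

def mlp(x):
    # One-hidden-layer ReLU network plus a linear skip term.
    x = np.asarray(x, dtype=float)
    h = np.maximum(0.0, x[:, None] - c)     # ReLU hidden layer
    return h @ coef[:H] + coef[H] * x + coef[H + 1]

# 2. Transform the network into a LUT: the network is linear between
#    breakpoints, so interpolating values sampled at the breakpoints
#    reproduces it exactly inside the table range.
lut_x = np.concatenate([[-8.0], c, [8.0]])
lut_y = mlp(lut_x)

def nn_lut(x):
    # Inference-time evaluation: table lookup + linear interpolation.
    return np.interp(x, lut_x, lut_y)

t = np.linspace(-8.0, 8.0, 10001)
print("max |NN-LUT - GELU| error:", np.abs(nn_lut(t) - gelu(t)).max())
```

In hardware, such a table reduces each non-linear evaluation to an index computation, two table reads, and one multiply-add, which is where the reported area, power, and latency savings come from.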
