Zero-Shot Dynamic Quantization for Transformer Inference

11/17/2022
by Yousef El-Kurdi, et al.

We introduce a novel run-time method that significantly reduces the accuracy loss incurred when quantizing BERT-like models to 8-bit integers. Existing quantization methods either modify the training procedure or require an additional calibration step that adjusts parameters using a selected held-out dataset. Our method allows models to benefit from quantization without either of these adjustments. We present results on several NLP tasks demonstrating the usefulness of this technique.
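For context, and not as a description of the paper's own technique, the sketch below illustrates what generic dynamic (run-time) int8 quantization looks like: per-tensor scales are computed from the tensors themselves at inference time, so no calibration dataset or training change is needed. The helper names and the toy matmul are illustrative assumptions, not part of the paper.

```python
import torch

def dynamic_quantize_int8(x: torch.Tensor):
    """Quantize a float tensor to int8 using a per-tensor scale
    computed at run time from the tensor itself (no calibration data)."""
    scale = x.abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return q, scale

# Toy example: emulate an int8 matrix multiply for one Linear layer.
x = torch.randn(4, 768)    # activations observed at inference time
w = torch.randn(768, 768)  # pretrained weights (could be quantized once offline)

qx, sx = dynamic_quantize_int8(x)
qw, sw = dynamic_quantize_int8(w)

# Accumulate in int32, then rescale the result back to float.
y_int32 = qx.to(torch.int32) @ qw.to(torch.int32)
y = y_int32.to(torch.float32) * (sx * sw)

ref = x @ w
rel_err = (y - ref).abs().mean() / ref.abs().mean()
print(f"mean relative error from int8 quantization: {rel_err:.4f}")
```

PyTorch also ships a ready-made version of this idea for linear layers, e.g. `torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)`, which quantizes weights ahead of time and activations dynamically at run time.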
