Rotation Invariant Quantization for Model Compression

03/03/2023
by Joseph Kampeas, et al.

Post-training Neural Network (NN) model compression is an attractive approach for deploying large, memory-consuming models on devices with limited memory resources. In this study, we investigate the rate-distortion tradeoff for NN model compression. First, we suggest a Rotation-Invariant Quantization (RIQ) technique that uses a single parameter to quantize the entire NN model, yielding a different rate at each layer, i.e., mixed-precision quantization. Then, we prove that our rotation-invariant approach is optimal in terms of compression. We rigorously evaluate RIQ and demonstrate its capabilities on various models and tasks. For example, RIQ achieves 19.4× and 52.9× compression ratios on pre-trained VGG dense and pruned models, respectively, with less than 0.4% accuracy degradation. Code: <https://github.com/ehaleva/RIQ>.
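
The abstract does not spell out the quantizer itself, but a minimal sketch of the idea it describes is shown below: one global parameter drives a per-layer step size tied to a rotation-invariant quantity, such as the layer's L2 norm, so each layer ends up at a different effective bit-rate. The function name `riq_quantize` and the exact step-size formula are illustrative assumptions, not the paper's method; see the repository linked above for the actual implementation.

```python
import numpy as np

def riq_quantize(weights, lam):
    """Sketch of a rotation-invariant uniform quantizer (assumed form).

    A single global parameter `lam` controls every layer. The step size
    scales with the layer's L2 norm, which is unchanged by rotations of
    the weight vector, so different layers get different effective
    bit-rates (mixed precision) from one knob.
    """
    quantized = {}
    for name, w in weights.items():
        # Step size derived from the rotation-invariant norm of the layer,
        # normalized by the number of weights (illustrative choice).
        delta = lam * np.linalg.norm(w) / np.sqrt(w.size)
        # Uniform (mid-tread) quantization with that step size.
        quantized[name] = np.round(w / delta) * delta
    return quantized

# Toy usage on random "layers"
rng = np.random.default_rng(0)
layers = {"fc1": rng.normal(size=(256, 128)), "fc2": rng.normal(size=(128, 10))}
compressed = riq_quantize(layers, lam=0.05)
```

Under this reading, a smaller `lam` gives finer bins (higher rate, lower distortion) and a larger `lam` gives coarser bins, so sweeping the single parameter traces out the rate-distortion tradeoff the abstract refers to.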
