An Information Theory-inspired Strategy for Automatic Network Pruning

by   Xiawu Zheng, et al.

Despite superior performance on many computer vision tasks, deep convolutional neural networks generally need to be compressed before they can be deployed on resource-constrained devices. Most existing network pruning methods require laborious human effort and prohibitive computational resources, especially when the constraints change. This practically limits the application of model compression when a model needs to be deployed on a wide range of devices. Moreover, existing methods still lack theoretical guidance. In this paper we propose an information theory-inspired strategy for automatic model compression. The principle behind our method is the information bottleneck theory, i.e., hidden representations should compress the redundant information they share with each other. We thus introduce the normalized Hilbert-Schmidt Independence Criterion (nHSIC) on network activations as a stable and generalized indicator of layer importance. Given a resource constraint, we combine the HSIC indicator with the constraint to transform the architecture search problem into a linear programming problem with quadratic constraints. Such a problem is easily solved by a convex optimization method within a few seconds. We also provide a rigorous proof showing that optimizing the normalized HSIC simultaneously minimizes the mutual information between different layers. Without any search process, our method achieves better compression tradeoffs compared to state-of-the-art compression algorithms. For instance, with ResNet-50, we achieve a 45.3% FLOPs reduction on ImageNet. Code is available at
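To make the layer-importance indicator concrete, the following is a minimal sketch of normalized HSIC between the activations of two layers, computed as nHSIC(X, Y) = HSIC(X, Y) / sqrt(HSIC(X, X) · HSIC(Y, Y)) with doubly centered Gaussian kernel Gram matrices. The Gaussian kernel with the median-distance bandwidth heuristic is an assumption here (the paper does not fix a kernel in this abstract), and all function names are hypothetical:

```python
import numpy as np

def _centered_gram(x, sigma=None):
    """Doubly centered Gaussian (RBF) kernel Gram matrix of activations x."""
    sq = np.sum(x * x, axis=1, keepdims=True)
    d2 = np.maximum(sq + sq.T - 2.0 * x @ x.T, 0.0)  # pairwise squared distances
    if sigma is None:
        sigma = np.sqrt(np.median(d2[d2 > 0]))  # median heuristic (assumed)
    k = np.exp(-d2 / (2.0 * sigma ** 2))
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n  # centering matrix H = I - (1/n) 11^T
    return h @ k @ h

def nhsic(x, y):
    """Normalized HSIC between activation matrices of shape (n_samples, n_features)."""
    kx, ky = _centered_gram(x), _centered_gram(y)
    hsic_xy = np.trace(kx @ ky)
    hsic_xx = np.trace(kx @ kx)
    hsic_yy = np.trace(ky @ ky)
    return hsic_xy / (np.sqrt(hsic_xx * hsic_yy) + 1e-12)
```

A layer whose activations have high nHSIC with neighboring layers carries largely redundant information and is a candidate for pruning; nHSIC(X, X) is 1 by construction, giving a scale-free score in [0, 1].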




