Exploring Gradient Flow Based Saliency for DNN Model Compression

10/24/2021
by Xinyu Liu, et al.

Model pruning aims to reduce the size or computational overhead of deep neural network (DNN) models. Traditional pruning methods such as ℓ1-norm pruning, which evaluate the significance of each channel, focus too narrowly on the local analysis of individual channels: they rely on the magnitude of the entire feature map while ignoring its relationship to the batch normalization (BN) and ReLU layers that follow each convolution. To overcome these problems, we propose a new model pruning method from the perspective of gradient flow. Specifically, we first theoretically analyze a channel's influence via Taylor expansion, integrating the effects of the BN layer and the ReLU activation function. We then show that combining the first-order Taylor polynomials of the scaling and shifting parameters of the BN layer effectively indicates the significance of a channel in a DNN. Comprehensive experiments on both image classification and image denoising tasks demonstrate the superiority of the proposed theory and scheme. Code is available at https://github.com/CityU-AIM-Group/GFBS.
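The criterion described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration, not the authors' implementation (see the GFBS repository for that): it assumes the saliency of a channel is approximated by the magnitude of the first-order Taylor terms of the BN scaling parameter gamma and shifting parameter beta with respect to the loss; the function name bn_taylor_saliency and the exact way the two terms are combined are assumptions made for illustration.

```python
import torch
import torch.nn as nn

def bn_taylor_saliency(model: nn.Module, loss: torch.Tensor) -> dict:
    """Per-channel saliency from first-order Taylor terms of BN parameters.

    Sketch only: scores channel c by |gamma_c * dL/dgamma_c + beta_c * dL/dbeta_c|,
    a first-order estimate of the loss change if the channel were removed.
    """
    loss.backward()  # populates .grad on the BN scale (weight) and shift (bias)
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            gamma_term = module.weight.detach() * module.weight.grad  # gamma * dL/dgamma
            beta_term = module.bias.detach() * module.bias.grad       # beta * dL/dbeta
            scores[name] = (gamma_term + beta_term).abs()             # one score per channel
    return scores
```

Channels with the smallest scores contribute least to the loss to first order and would be pruned first; in practice such scores are typically accumulated over a batch of training data before ranking.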
