An Improved Trade-off Between Accuracy and Complexity with Progressive Gradient Pruning

by   Le Thanh Nguyen-Meidine, et al.

Although deep neural networks (NNs) have achieved state-of-the-art accuracy in many visual recognition tasks, the growing computational complexity and energy consumption of networks remains an issue, especially for applications on platforms with limited resources and requiring real-time processing. Channel pruning techniques have recently shown promising results for the compression of convolutional NNs (CNNs). However, these techniques can result in low accuracy and complex optimizations because some only prune after training CNNs, while others prune from scratch during training by integrating sparsity constraints or modifying the loss function. The progressive soft filter pruning technique provides greater training efficiency, but its soft pruning strategy does not handle the backward pass, which is needed for better optimization. In this paper, a new Progressive Gradient Pruning (PGP) technique is proposed for iterative channel pruning during training. It relies on a criterion that measures the change in channel weights, which improves on existing progressive pruning, and on effective hard and soft pruning strategies to adapt momentum tensors during the backward propagation pass. Experimental results obtained after training various CNNs on the MNIST and CIFAR10 datasets indicate that the PGP technique can achieve a better trade-off between classification accuracy and network (time and memory) complexity than state-of-the-art channel pruning techniques.
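The two ingredients named in the abstract, a criterion based on the change in channel weights and a soft pruning step that also zeroes the corresponding momentum entries, can be sketched as follows. This is a hypothetical minimal illustration in NumPy, not the authors' implementation; the function names, the L2-norm form of the criterion, and the fixed pruning count are all assumptions.

```python
import numpy as np

def rank_channels_by_change(w_prev, w_curr):
    """Criterion sketch: score each output channel by the L2 norm of how
    much its weights changed between two training steps (assumed form)."""
    delta = (w_curr - w_prev).reshape(w_curr.shape[0], -1)
    change = np.linalg.norm(delta, axis=1)
    return np.argsort(change)  # ascending: least-changed channels first

def soft_prune(weights, momentum, prune_idx):
    """Soft pruning sketch: zero the selected channels AND their momentum
    entries, so the backward pass does not immediately revive them, while
    keeping the channels in the network for possible later recovery."""
    weights = weights.copy()
    momentum = momentum.copy()
    weights[prune_idx] = 0.0
    momentum[prune_idx] = 0.0
    return weights, momentum

# Toy example: a conv layer with 8 output channels of shape (3, 3, 3).
rng = np.random.default_rng(0)
w_prev = rng.standard_normal((8, 3, 3, 3))
w_curr = w_prev + 0.01 * rng.standard_normal(w_prev.shape)
w_curr[2] += 0.5          # channel 2 changed a lot -> should survive
momentum = rng.standard_normal(w_curr.shape)

order = rank_channels_by_change(w_prev, w_curr)
pruned = order[:2]        # progressively prune the 2 least-changed channels
w_new, m_new = soft_prune(w_curr, momentum, pruned)
```

In a full training loop the pruned fraction would grow progressively across epochs, and a final hard-pruning step would physically remove the zeroed channels to realize the time and memory savings.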

