DropPruning for Model Compression

12/05/2018
by Haipeng Jia, et al.

Deep neural networks (DNNs) have achieved great success on a variety of challenging tasks. However, most successful DNNs are structurally complex, requiring substantial storage and many floating-point operations. This paper proposes a novel technique, named Drop Pruning, to compress DNNs by pruning the weights of a dense, high-accuracy baseline model without accuracy loss. Drop Pruning follows the standard iterative prune-retrain procedure, with a drop strategy at each pruning step: drop out, which stochastically deletes some unimportant weights, and drop in, which stochastically recovers some previously pruned weights. Drop out and drop in are intended to address two drawbacks of traditional pruning methods: local importance judgment and the irretrievable pruning process, respectively. A suitable choice of drop probabilities decreases the model size during the pruning process and drives it toward the target sparsity. Drop Pruning also shares some of the spirit of dropout, of stochastic algorithms in integer optimization, and of the Dense-Sparse-Dense training technique. Drop Pruning can significantly reduce overfitting while compressing the model. Experimental results demonstrate that Drop Pruning achieves state-of-the-art performance on many benchmark pruning tasks, about 11.1× compression of VGG-16 on CIFAR-10 and 14.3× compression of LeNet-5 on MNIST without accuracy loss, which may provide new insights into model compression.
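To make the drop out / drop in idea concrete, below is a minimal NumPy sketch of a single pruning step under simple assumptions: weight importance is judged by magnitude, and the probabilities p_out and p_in (as well as the 0.5 candidate-drop probability) are illustrative placeholders, not values from the paper. The function name drop_pruning_step is hypothetical and not the authors' implementation.

import numpy as np

def drop_pruning_step(weights, mask, p_out=0.1, p_in=0.02, rng=None):
    """One illustrative Drop Pruning step on a weight tensor.

    weights : dense weight values (kept so pruned weights can be recovered)
    mask    : binary mask of the same shape, 1 = active weight, 0 = pruned
    p_out   : fraction of smallest-magnitude active weights that become
              candidates for stochastic drop out (assumed magnitude criterion)
    p_in    : probability of stochastically recovering a pruned weight
    """
    rng = rng or np.random.default_rng()
    w_abs = np.abs(weights)

    # drop out: stochastically delete some unimportant (small-magnitude) weights
    active = mask == 1
    if active.any():
        threshold = np.quantile(w_abs[active], p_out)     # magnitude cutoff
        candidates = active & (w_abs <= threshold)
        drop = candidates & (rng.random(weights.shape) < 0.5)  # illustrative 0.5
        mask = np.where(drop, 0, mask)

    # drop in: stochastically recover some previously pruned weights
    pruned = mask == 0
    recover = pruned & (rng.random(weights.shape) < p_in)
    mask = np.where(recover, 1, mask)

    return weights * mask, mask

# Example usage: iterate prune-retrain, applying one drop step per iteration.
# w = np.random.randn(256, 256); m = np.ones_like(w)
# w_sparse, m = drop_pruning_step(w, m)

In an actual prune-retrain loop, the surviving weights would be retrained between successive calls, and p_out/p_in would be scheduled so that the sparsity flows toward the target value, as described in the abstract.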
