AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

04/14/2023
by Abhisek Kundu, et al.

Sparse training is emerging as a promising avenue for reducing the computational cost of training neural networks. Several recent studies have proposed pruning methods using learnable thresholds to efficiently explore the non-uniform distribution of sparsity inherent within the models. In this paper, we propose Gradient Annealing (GA), where gradients of masked weights are scaled down in a non-linear manner. GA provides an elegant trade-off between sparsity and accuracy without the need for additional sparsity-inducing regularization. We integrated GA with the latest learnable pruning methods to create an automated sparse training algorithm called AutoSparse, which achieves better accuracy and/or training/inference FLOPS reduction than existing learnable pruning methods for sparse ResNet50 and MobileNetV1 on ImageNet-1K. AutoSparse achieves a (2x, 7x) reduction in (training, inference) FLOPS for ResNet50 on ImageNet at 80% sparsity. AutoSparse also outperforms the sparse-to-sparse SotA method MEST (uniform sparsity) for 80% sparse ResNet50 with similar accuracy, where MEST uses 12% more training FLOPS and 50% more inference FLOPS.
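The abstract does not spell out the exact GA formulation, so the following is only a minimal PyTorch sketch of the stated idea: weights below a threshold are masked out in the forward pass, while their gradients in the backward pass are scaled by a factor alpha that is annealed non-linearly toward zero over training. The names (GradientAnnealedMask, annealed_alpha), the cosine decay schedule, and the fixed (non-learnable) threshold are all illustrative assumptions, not the paper's actual implementation; in AutoSparse the pruning thresholds are themselves learnable, which is omitted here for brevity.

```python
import math
import torch


class GradientAnnealedMask(torch.autograd.Function):
    """Hypothetical sketch of gradient annealing for sparse training.

    Forward: zero out weights whose magnitude falls below `threshold`.
    Backward: pass the full gradient to surviving weights, but scale the
    gradient of masked weights by `alpha` (annealed toward 0 over training),
    so pruned weights can still recover early on and are gradually frozen out.
    """

    @staticmethod
    def forward(ctx, weight, threshold, alpha):
        mask = (weight.abs() > threshold).to(weight.dtype)
        ctx.save_for_backward(mask)
        ctx.alpha = alpha
        return weight * mask

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        # Surviving weights: full gradient. Masked weights: gradient scaled by alpha.
        grad_weight = grad_output * (mask + ctx.alpha * (1.0 - mask))
        # No gradients for `threshold` or `alpha` in this simplified sketch.
        return grad_weight, None, None


def annealed_alpha(step, total_steps, alpha0=1.0):
    """Assumed non-linear (cosine) decay of the annealing factor from alpha0 to 0."""
    progress = min(step / total_steps, 1.0)
    return alpha0 * 0.5 * (1.0 + math.cos(math.pi * progress))
```

Inside a layer's forward pass one would then use something like `w_sparse = GradientAnnealedMask.apply(self.weight, tau, annealed_alpha(step, total_steps))`, so that sparsity is applied on the fly while the annealed gradient flow controls the sparsity-accuracy trade-off described in the abstract.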

