Directed-Weighting Group Lasso for Eltwise Blocked CNN Pruning

10/21/2019
by Ke Zhan, et al.

The eltwise layer is a commonly used structure in multi-branch deep learning networks. In a filter-wise pruning procedure, due to the specific operation of the eltwise layer, all of its preceding convolutional layers must vote on which filters (by index) should be pruned. Since only the intersection of the voted filters is pruned, the compression rate is limited. This work proposes a method called Directed-Weighting Group Lasso (DWGL), which enforces an index-wise incremental (directed) coefficient on the filter-level group lasso terms, so that low-index filters receiving high activation tend to be kept while high-index ones tend to be pruned. With DWGL, far fewer filters are retained during the voting process and the compression rate can be boosted. The proposed method is tested on the ResNet series networks. On CIFAR-10, it achieved compression rates of 75.34% and 52.06%. On ImageNet, it achieved a 53% compression rate with a small error increment, speeding up the network by 2.23 times; it further achieved a 75% compression rate, speeding up the network by 4 times.
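The abstract describes DWGL as applying an index-wise increasing coefficient to each filter's group-lasso term, so that regularization pressure grows with the filter index. The following is a minimal PyTorch sketch of how such an index-weighted penalty could be computed for a convolutional layer; the function name dwgl_penalty, the hyperparameters base_lambda and growth, and the linear coefficient schedule are illustrative assumptions, not the exact formulation from the paper.

import torch
import torch.nn as nn

def dwgl_penalty(conv: nn.Conv2d, base_lambda: float = 1e-4, growth: float = 0.1) -> torch.Tensor:
    """Index-weighted (directed) group-lasso penalty over the filters of one conv layer.

    Each filter i contributes lambda_i * ||W_i||_2, where lambda_i increases with the
    filter index, so high-index filters are pushed toward zero (pruned) while
    low-index filters tend to be kept. The linear schedule lambda_i = base_lambda * (1 + growth * i)
    is an illustrative assumption, not the paper's exact coefficient form.
    """
    weight = conv.weight                                    # shape: (out_channels, in_channels, kH, kW)
    group_norms = weight.flatten(start_dim=1).norm(dim=1)   # ||W_i||_2 per output filter
    idx = torch.arange(weight.shape[0], dtype=weight.dtype, device=weight.device)
    coeffs = base_lambda * (1.0 + growth * idx)             # index-wise incremental coefficients
    return (coeffs * group_norms).sum()

# Usage: add the penalty of every conv layer to the task loss before backpropagation.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 16, 3, padding=1))
x = torch.randn(8, 3, 32, 32)
task_loss = model(x).mean()                                 # stand-in for the real training loss
reg = sum(dwgl_penalty(m) for m in model.modules() if isinstance(m, nn.Conv2d))
loss = task_loss + reg
loss.backward()

Because the same monotone coefficient schedule is applied to every branch feeding an eltwise layer, each branch tends to concentrate its surviving filters at the low indices, which keeps the intersection of the voted filter sets large and lets more filters be pruned.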
