Practical Network Acceleration with Tiny Sets
Network compression is effective in accelerating the inference of deep neural networks, but it often requires fine-tuning with all the training data to recover from the accuracy loss. This is impractical in some applications, however, due to data privacy concerns or constraints on the compression time budget. To deal with these issues, we propose PRACTISE, a method that accelerates networks using only tiny sets of training images. By considering both the pruned and unpruned parts of a compressed model, PRACTISE alleviates layer-wise error accumulation, the main drawback of previous methods. Furthermore, existing methods are confined to a few compression schemes, achieve limited speedup in terms of latency, and are unstable. In contrast, PRACTISE is stable, fast to train, versatile enough to handle various compression schemes, and achieves low latency. We also argue that dropping entire blocks is a better compression scheme than existing ones when only tiny sets of training data are available. Extensive experiments demonstrate that PRACTISE achieves much higher accuracy and more stable models than state-of-the-art methods.
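The abstract advocates dropping entire residual blocks as the compression scheme. Below is a minimal sketch of what block dropping looks like on a standard torchvision ResNet-34; the choice of `layer3[2]` is arbitrary and purely illustrative, and this is not PRACTISE's block-selection criterion or recovery procedure (which uses the tiny training set).

```python
# Illustrative sketch only (not the authors' code), assuming torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision.models import resnet34

model = resnet34(weights=None)

# Drop one residual block by replacing it with an identity mapping.
# This is only shape-preserving for non-downsampling blocks, whose input
# and output tensors have identical channel and spatial dimensions.
model.layer3[2] = nn.Identity()

# The compressed model accepts the same input and skips the dropped block,
# reducing inference latency.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 1000])
```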