Data-Dependent Coresets for Compressing Neural Networks with Applications to Generalization Bounds

04/15/2018
by Cenk Baykal, et al.

The deployment of state-of-the-art neural networks containing millions of parameters to resource-constrained platforms may be prohibitive in terms of both time and space. In this work, we present an efficient coreset-based neural network compression algorithm that provably sparsifies the parameters of a trained feedforward neural network in a manner that approximately preserves the network's output. Our approach is based on an importance sampling scheme that judiciously defines a sampling distribution over the neural network parameters and, as a result, retains parameters of high importance while discarding redundant ones. Our method and analysis introduce an empirical notion of sensitivity and extend traditional coreset constructions to the application of compressing parameters. Our theoretical analysis establishes both instance-dependent and instance-independent bounds on the size of the resulting compressed network as a function of user-specified tolerance and failure probability parameters. As a corollary to our practical compression algorithm, we obtain novel generalization bounds that may provide new insights into the generalization properties of neural networks.
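The importance sampling idea summarized above can be illustrated with a small sketch. The following Python snippet is a minimal, hypothetical illustration rather than the authors' exact procedure: for a single neuron with non-negative incoming weights, it estimates an empirical sensitivity for each incoming edge from a small batch of activations, samples edges in proportion to that sensitivity, and reweights the sampled edges so the neuron's pre-activation is preserved in expectation. The function names, the restriction to non-negative weights, and the with-replacement sampling are simplifying assumptions.

```python
import numpy as np

def empirical_sensitivities(weights, activations):
    """Hypothetical sketch: empirical sensitivity of each incoming edge of a
    single neuron, estimated from a small batch of previous-layer activations.

    weights:      (d,) non-negative incoming weights of the neuron
    activations:  (n, d) non-negative activations on n data points
    """
    # Contribution of each edge to the neuron's pre-activation, per data point.
    contrib = activations * weights                  # (n, d)
    totals = contrib.sum(axis=1, keepdims=True)      # (n, 1)
    # Relative contribution of each edge; its empirical sensitivity is the
    # maximum relative contribution observed over the batch.
    ratios = np.divide(contrib, totals,
                       out=np.zeros_like(contrib), where=totals > 0)
    return ratios.max(axis=0)                        # (d,)

def sparsify_neuron(weights, activations, sample_size, rng=None):
    """Importance-sample `sample_size` edges (with replacement) in proportion
    to empirical sensitivity; unsampled edges are zeroed out. Sampled edges
    are reweighted so the pre-activation is preserved in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    s = empirical_sensitivities(weights, activations)
    probs = s / s.sum()
    sparse = np.zeros_like(weights)
    idx = rng.choice(len(weights), size=sample_size, p=probs)
    for j in idx:
        # Unbiased reweighting: each draw contributes w_j / (m * p_j).
        sparse[j] += weights[j] / (sample_size * probs[j])
    return sparse

# Example: a neuron with 1000 incoming edges, compressed to at most 50 edges.
rng = np.random.default_rng(0)
w = rng.exponential(scale=1.0, size=1000)
a = rng.random((32, 1000))           # activations on a batch of 32 points
w_sparse = sparsify_neuron(w, a, sample_size=50, rng=rng)
print(np.count_nonzero(w_sparse), "non-zero weights retained")
```

In this sketch, edges with consistently small relative contribution receive low sampling probability and are likely to be discarded, while high-sensitivity edges are retained and reweighted, which mirrors the intuition of keeping important parameters and dropping redundant ones.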

