Network-size independent covering number bounds for deep networks

11/02/2017
by   Mayank Kabra, et al.

We give a covering number bound for deep learning networks that is independent of the size of the network. The key to this simple analysis is the observation that, for linear classifiers, rotating the data does not change the covering number. We can therefore ignore the rotation part of each layer's linear transformation and obtain the covering number bound by concentrating on the scaling part alone.
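The rotation/scaling split mentioned above can be made concrete with the singular value decomposition: any weight matrix W factors as W = U S Vᵀ, where U and Vᵀ are orthogonal (rotations/reflections) and S is a diagonal scaling. Orthogonal maps preserve Euclidean distances, so they leave covering numbers of a data set unchanged. The following NumPy sketch (illustrative only, not code from the paper) checks both facts numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic layer weight matrix (illustrative dimensions).
W = rng.standard_normal((4, 4))

# SVD: W = U @ diag(s) @ Vt, with U and Vt orthogonal and s >= 0.
U, s, Vt = np.linalg.svd(W)

x = rng.standard_normal(4)

# Rotations preserve Euclidean norms, hence pairwise distances,
# hence covering numbers of the data.
assert np.isclose(np.linalg.norm(Vt @ x), np.linalg.norm(x))
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))

# Applying W is: rotate, scale by the singular values, rotate again.
# Only the scaling step changes the geometry of the data.
assert np.allclose(W @ x, U @ (np.diag(s) @ (Vt @ x)))
```

Since the two rotations are distance-preserving, a covering argument for the layer only needs to track the diagonal scaling S, which is what makes the bound independent of the network's width.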
