A Capacity Scaling Law for Artificial Neural Networks

08/20/2017
by Gerald Friedland, et al.

By assuming an ideal neural network with gating functions handling worst-case data, we derive the calculation of two critical numbers predicting the behavior of perceptron networks. First, we derive the calculation of what we call the lossless memory (LM) dimension. The LM dimension is a generalization of the Vapnik-Chervonenkis (VC) dimension that makes no assumptions about structure in the data and therefore provides an upper bound for perfectly fitting any training data. Second, we derive what we call the MacKay (MK) dimension. This limit marks the point at which forgetting becomes necessary, that is, a lower limit for most generalization uses of the network. Our derivations are performed by embedding the ideal network into Shannon's communication model, which allows us to interpret the two points as capacities measured in bits. We validate our upper bounds with repeatable experiments using different network configurations, diverse implementations, varying activation functions, and several learning algorithms. The bottom line is that the two capacity points scale strictly linearly with the number of weights. Among other practical applications, our result allows network implementations with gating functions (e.g., sigmoid or rectified linear units) to be evaluated against our upper limit independent of a concrete task.
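The kind of validation experiment described above can be approximated with a short script: sweep the size of a randomly labeled training set, record the largest set that a small sigmoid network can still fit exactly, and compare that number against the network's weight count. The sketch below is illustrative only; the input dimension, architecture, solver, restart count, and stopping rule are assumptions for this sketch, not the configurations used in the paper, and a failed fit may reflect an optimization artifact rather than the true capacity limit.

```python
# Sketch of a memorization-capacity sweep for a small sigmoid MLP.
# All hyperparameters below are illustrative assumptions, not the paper's protocol.
import warnings
import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore", category=ConvergenceWarning)

DIM = 8  # input dimension (arbitrary choice for this sketch)


def weight_count(n_hidden, dim=DIM):
    """Trainable parameters (weights and biases) of a dim -> n_hidden -> 1 sigmoid MLP."""
    return dim * n_hidden + n_hidden + n_hidden + 1


def fits_perfectly(X, y, n_hidden, restarts=3):
    """Return True if some random initialization memorizes all labels exactly."""
    for seed in range(restarts):
        clf = MLPClassifier(hidden_layer_sizes=(n_hidden,),
                            activation="logistic",   # sigmoid gating functions
                            solver="lbfgs",
                            max_iter=5000,
                            random_state=seed)
        clf.fit(X, y)
        if clf.score(X, y) == 1.0:
            return True
    return False


rng = np.random.default_rng(0)
for n_hidden in (4, 8, 16):
    n_points = 2
    while True:
        X = rng.standard_normal((n_points, DIM))   # unstructured (worst-case-like) inputs
        y = rng.integers(0, 2, size=n_points)      # random binary labels
        y[0], y[1] = 0, 1                          # ensure both classes are present
        if not fits_perfectly(X, y, n_hidden):
            break                                  # first failure ends the sweep (underestimates capacity)
        n_points += 1
    print(f"hidden={n_hidden:3d}  weights={weight_count(n_hidden):4d}  "
          f"largest perfectly fit random set ~ {n_points - 1}")
```

Plotting the largest memorized set size against the weight count for several widths is one simple way to eyeball the linear scaling claimed in the abstract.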
