α-Stable convergence of heavy-tailed infinitely-wide neural networks
We consider infinitely wide multi-layer perceptrons (MLPs), which arise as limits of standard deep feed-forward neural networks. We assume that, in each layer, the weights of the MLP are initialized with i.i.d. samples from either a light-tailed (finite-variance) or a heavy-tailed distribution lying in the domain of attraction of a symmetric α-stable distribution, where α∈(0,2] may depend on the layer. The bias terms of a layer are initialized i.i.d. from a symmetric α-stable distribution with the same α parameter as that layer. We then extend a recent result of Favaro, Fortini, and Peluchetti (2020) to show that, under a suitable scaling, the vector of pre-activation values at all nodes of a given hidden layer converges to a vector of i.i.d. random variables with symmetric α-stable distributions.
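The initialization and scaling described above can be illustrated numerically. The following is a minimal sketch, not taken from the paper: it samples symmetric α-stable weights and biases via the Chambers-Mallows-Stuck method (using exactly α-stable weights, a special case of the domain-of-attraction assumption), applies the n^(-1/α) width scaling to the pre-activations of a hidden layer, and checks that the resulting marginals are heavy-tailed. The widths, input, activation (tanh), and α=1.5 are illustrative choices only.

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng):
    """Sample standard symmetric alpha-stable variates (beta = 0,
    unit scale) via the Chambers-Mallows-Stuck method."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        return np.tan(U)  # symmetric 1-stable = Cauchy
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

# Illustrative hyperparameters (not from the paper): one hidden layer,
# alpha fixed across layers for simplicity.
alpha = 1.5
d, n = 10, 5000          # input dimension, hidden-layer width
rng = np.random.default_rng(0)
x = rng.normal(size=d)   # an arbitrary fixed input

# Layer 1: symmetric alpha-stable weights and biases, width scaling d^(-1/alpha).
W1 = sym_alpha_stable(alpha, (n, d), rng)
b1 = sym_alpha_stable(alpha, n, rng)
h = np.tanh(d ** (-1 / alpha) * W1 @ x + b1)

# Layer 2 pre-activations under the n^(-1/alpha) scaling: by the stability
# property, each coordinate is again symmetric alpha-stable.
W2 = sym_alpha_stable(alpha, (n, n), rng)
b2 = sym_alpha_stable(alpha, n, rng)
z = n ** (-1 / alpha) * W2 @ h + b2

# Heavy tails: for alpha < 2 a non-negligible fraction of pre-activations
# is far from zero, unlike in the Gaussian (alpha = 2) regime.
tail_fraction = np.mean(np.abs(z) > 10)
print(tail_fraction)
```

Since the weights here are exactly α-stable rather than merely attracted to a stable law, the stable form of the pre-activations holds at finite width; the paper's contribution is that the same limit arises from the broader domain-of-attraction class.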