Probabilistic bounds on data sensitivity in deep rectifier networks

07/13/2020
by Blaine Rister et al.

Neuron death is a complex phenomenon with implications for model trainability, but until recently it was measured only empirically. Recent articles have claimed that, as the depth of a rectifier neural network grows to infinity, the probability of finding a valid initialization decreases to zero. In this work, we provide a simple and rigorous proof of that result. Then, we show what happens when the width of each layer grows simultaneously with the depth. We derive both upper and lower bounds on the probability that a ReLU network is initialized to a trainable point, as a function of model hyperparameters. Contrary to previous claims, we show that it is possible to increase the depth of a network indefinitely, so long as the width increases as well. Furthermore, our bounds are asymptotically tight under reasonable assumptions: first, the upper bound coincides with the true probability for a single-layer network with the largest possible input set. Second, the true probability converges to our lower bound when the network width and depth both grow without limit. Our proof is based on the striking observation that very deep rectifier networks concentrate all outputs towards a single eigenvalue, in the sense that their normalized output variance goes to zero regardless of the network width. Finally, we develop a practical sign-flipping scheme which guarantees, with probability one, that for a k-layer network the ratio of living training data points is at least 2^-k. We confirm our results with numerical simulations, which suggest that the actual improvement far exceeds this theoretical minimum. We also discuss how neuron death provides a theoretical interpretation for various network design choices, such as batch normalization, residual layers, and skip connections, and how it could inform the design of very deep neural networks.
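The neuron-death behavior described above can be explored with a short simulation. The sketch below is not the authors' code: it estimates the fraction of training points that are "dead" (unable to propagate a gradient) at random initialization of a fully connected ReLU network. The He-style weight scale, zero biases, standard normal inputs, and the function name dead_fraction are assumptions chosen for illustration.

```python
# Sketch: estimate the fraction of data points that die at random
# initialization of a deep fully connected ReLU network.
import numpy as np

def dead_fraction(width, depth, n_points=1000, input_dim=16, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n_points, input_dim))  # assumed input distribution
    h = x
    alive = np.ones(n_points, dtype=bool)
    in_dim = input_dim
    for _ in range(depth):
        # He-style initialization with zero biases (an assumption, not the
        # paper's exact setup).
        W = rng.standard_normal((in_dim, width)) * np.sqrt(2.0 / in_dim)
        h = np.maximum(h @ W, 0.0)  # ReLU activation
        # A point is dead at this layer if every unit outputs zero, so no
        # gradient can flow back through it for that input.
        alive &= (h > 0).any(axis=1)
        in_dim = width
    return 1.0 - alive.mean()

for depth in (2, 8, 32):
    for width in (4, 16, 64):
        print(f"depth={depth:3d} width={width:3d} "
              f"dead fraction ~ {dead_fraction(width, depth):.3f}")
```

Because biases start at zero here, a point whose activations vanish at some layer stays zero in every later layer, so the cumulative mask tracks exactly which inputs can still carry a gradient. Sweeping depth and width in this way mirrors the spirit of the paper's numerical experiments, though not their precise setup.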


Related research:
- Width and Depth Limits Commute in Residual Networks (02/01/2023): We show that taking the width and depth to infinity in a deep neural net...
- How Many Neurons Does it Take to Approximate the Maximum? (07/18/2023): We study the size of a neural network needed to approximate the maximum ...
- Over-parametrized neural networks as under-determined linear systems (10/29/2020): We draw connections between simple neural networks and under-determined ...
- Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes (02/24/2023): We prove that the set of functions representable by ReLU neural networks...
- Adversarial Examples in Multi-Layer Random ReLU Networks (06/23/2021): We consider the phenomenon of adversarial examples in ReLU networks with...
- Feature Space Saturation during Training (06/15/2020): We propose layer saturation - a simple, online-computable method for ana...
- Information contraction in noisy binary neural networks and its implications (01/28/2021): Neural networks have gained importance as the machine learning models th...
