On the Learnability of Deep Random Networks

04/08/2019
by Abhimanyu Das, et al.

In this paper we study the learnability of deep random networks from both theoretical and practical points of view. On the theoretical front, we show that the learnability of random deep networks with sign activations drops exponentially with their depth. On the practical front, we find that learnability drops sharply with depth even under state-of-the-art training methods, suggesting that our stylized theoretical results are borne out in practice.
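The setting behind this claim can be illustrated with a teacher-student experiment: label random inputs with a random sign-activation network of a given depth, then check how well a learner recovers those labels as depth grows. The sketch below is not the authors' code; the teacher construction, widths, sample sizes, and the use of a linear least-squares probe as the learner are all illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def random_sign_net(depth, width, d_in):
        """Random teacher: `depth` sign-activation layers with Gaussian weights (assumed setup)."""
        Ws = [rng.standard_normal((d_in if i == 0 else width, width)) for i in range(depth)]
        v = rng.standard_normal(width)
        def f(X):
            h = X
            for W in Ws:
                h = np.sign(h @ W)   # sign activation after each random layer
            return np.sign(h @ v)    # binary label in {-1, +1}
        return f

    # Measure how well a simple learner fits teachers of increasing depth.
    d_in, width, n_train, n_test = 20, 50, 2000, 2000
    for depth in [1, 2, 4, 8]:
        teacher = random_sign_net(depth, width, d_in)
        Xtr = rng.standard_normal((n_train, d_in)); ytr = teacher(Xtr)
        Xte = rng.standard_normal((n_test, d_in)); yte = teacher(Xte)
        # Linear probe via least squares: a crude stand-in for a trained student.
        w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
        acc = np.mean(np.sign(Xte @ w) == yte)
        # Expected to drift toward chance (0.5) as depth grows.
        print(f"depth={depth}: test accuracy {acc:.3f}")

In the paper's experiments the student is itself a deep network trained with modern methods rather than a linear probe, but the qualitative question is the same: how quickly does test accuracy decay toward chance as the teacher's depth increases.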


Related research

06/03/2019  Deep ReLU Networks Have Surprisingly Few Activation Patterns
The success of deep networks has been attributed in part to their expres...

03/08/2018  Some Approximation Bounds for Deep Networks
In this paper we introduce new bounds on the approximation of functions ...

01/29/2019  On the Expressive Power of Deep Fully Circulant Neural Networks
In this paper, we study deep fully circulant neural networks, that is de...

06/16/2016  Exponential expressivity in deep neural networks through transient chaos
We combine Riemannian geometry with the mean field theory of high dimens...

11/04/2016  Deep Information Propagation
We study the behavior of untrained neural networks whose weights and bia...

02/19/2020  Span Recovery for Deep Neural Networks with Applications to Input Obfuscation
The tremendous success of deep neural networks has motivated the need to...

02/21/2021  Deep ReLU Networks Preserve Expected Length
Assessing the complexity of functions computed by a neural network helps...
