Layer Dynamics of Linearised Neural Nets

by Saurav Basu et al.

Despite the phenomenal success of deep learning in recent years, there remains a gap in understanding the fundamental mechanics of neural nets. Much research is focused on handcrafting larger and more complex networks, and the design decisions are often ad hoc and based on intuition. Some recent research has aimed to demystify the learning dynamics of neural nets by building a theory from first principles, for example by characterising the non-linear dynamics of specialised linear deep neural nets (such as orthogonal networks). In this work, we expand on this line of research and derive properties of the learning dynamics of general multi-layer linear neural nets. Although a multi-layer linear net is merely an over-parameterisation of a single-layer linear network, it offers interesting insights into how learning proceeds in small pockets of the data space. In particular, we show that the layers of a linear net grow at approximately the same rate, and that learning passes through distinct phases with markedly different layer growth. We then apply a linearisation process to a general ReLU neural net and show how the nonlinearity breaks down the growth symmetry observed in linear neural nets. Overall, our work can be viewed as an initial step towards a first-principles theory of how layer design affects learning dynamics.
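The "layers grow at approximately the same rate" claim can be made concrete with a small numerical sketch. The following is an illustrative simulation (not the paper's code, and its dimensions, learning rate, and random teacher are all assumptions): we train a 3-layer linear net with plain gradient descent on a random linear regression task and track each layer's Frobenius norm. For deep linear nets under gradient flow, the squared Frobenius norms of all layers are known to change by the same amount, so the per-layer squared-norm increments should come out approximately equal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random linear regression task: a hypothetical linear "teacher" generates Y.
d, n = 5, 100
X = rng.standard_normal((d, n))
Y = rng.standard_normal((1, d)) @ X

# 3-layer linear net y = W3 @ W2 @ W1 @ x, small random initialisation.
Ws = [0.1 * rng.standard_normal((d, d)),
      0.1 * rng.standard_normal((d, d)),
      0.1 * rng.standard_normal((1, d))]
lr, steps = 2e-3, 4000

norms, losses = [], []
for _ in range(steps):
    H1 = Ws[0] @ X
    H2 = Ws[1] @ H1
    R = Ws[2] @ H2 - Y                     # residual of the net's output
    losses.append(float(np.mean(R ** 2)))
    # Gradients of the mean-squared error with respect to each layer.
    G3 = 2 * R @ H2.T / n
    G2 = 2 * Ws[2].T @ R @ H1.T / n
    G1 = 2 * Ws[1].T @ Ws[2].T @ R @ X.T / n
    for W, G in zip(Ws, (G1, G2, G3)):
        W -= lr * G                        # in-place gradient step
    norms.append([np.linalg.norm(W) for W in Ws])

norms = np.asarray(norms)
# Change in squared Frobenius norm per layer over the whole run:
# under gradient flow these increments are exactly equal across layers.
increments = norms[-1] ** 2 - norms[0] ** 2
print("loss:", losses[0], "->", losses[-1])
print("squared-norm increments per layer:", increments)
```

Because discrete gradient descent only approximates gradient flow, the increments agree up to an O(lr²) drift; shrinking the learning rate tightens the match.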




