Doing the impossible: Why neural networks can be trained at all

by Nathan O. Hodas et al.

As deep neural networks grow in size, from thousands to millions to billions of weights, the performance of those networks becomes limited by our ability to accurately train them. A common naive question arises: if we have a system with billions of degrees of freedom, don't we also need billions of samples to train it? Of course, the success of deep learning indicates that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses, and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Naively sampling possible configurations until an optimal one is reached is not viable, even over the age of the universe. Instead, there appears to be a mechanism in the above phenomena that forces them toward configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the current work we use the concept of mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training. We show that adding structure to the neural network that enforces higher mutual information between layers speeds training and leads to more accurate results. High mutual information between layers implies that the effective number of free parameters is exponentially smaller than the raw number of tunable weights.
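The quantity at the heart of the abstract, mutual information between successive layers, can be estimated directly from samples of layer activations. Below is a minimal histogram-based sketch (not the authors' method; the estimator, bin count, and the toy "layer" activations `h1`/`h2` are illustrative assumptions): when one layer's activations are a near-deterministic function of the previous layer's, the estimated mutual information is high, and it collapses toward zero for independent signals.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X; Y) in nats between two 1-D samples.

    I(X; Y) = sum_{i,j} p(x_i, y_j) * log( p(x_i, y_j) / (p(x_i) p(y_j)) )
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)          # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)          # marginal p(y), shape (1, bins)
    nz = pxy > 0                                 # avoid log(0) on empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
h1 = rng.standard_normal(10_000)                       # stand-in for layer k activations
h2 = np.tanh(h1 + 0.1 * rng.standard_normal(10_000))   # layer k+1: strongly coupled to h1
noise = rng.standard_normal(10_000)                    # an unrelated signal

print(mutual_information(h1, h2))     # large: successive "layers" share information
print(mutual_information(h1, noise))  # near zero: independent signals
```

With a plug-in histogram estimator like this, the independent-signal estimate is biased slightly above zero (roughly `(bins - 1)^2 / (2N)` nats), which is why more careful estimators are used in the information-theoretic deep learning literature; the qualitative contrast between coupled and independent pairs is what matters here.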




Related research:

- Learning Not to Learn: Training Deep Neural Networks with Biased Data
- Data Privacy and Utility Trade-Off Based on Mutual Information Neural Estimator
- An Information-Theoretic View for Deep Learning
- Entropy and mutual information in models of deep neural networks
- InfoShape: Task-Based Neural Data Shaping via Mutual Information
- Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
- A model is worth tens of thousands of examples
