Adaptive Estimators Show Information Compression in Deep Neural Networks

by Ivan Chelombiev, et al.
University of Bristol

To improve how neural networks function, it is crucial to understand their learning process. The information bottleneck theory of deep learning proposes that neural networks achieve good generalization by compressing their representations to disregard information that is irrelevant to the task. However, empirical evidence for this theory is conflicting: compression was observed only in networks with saturating activation functions, while networks with non-saturating activation functions achieved comparable task performance but showed no compression. In this paper we develop more robust mutual information estimation techniques that adapt to the hidden activity of the network and produce more sensitive measurements of activations from all functions, especially unbounded ones. Using these adaptive estimators, we explore compression in networks with a range of activation functions. First, we show that saturation of the activation function is not required for compression, and that the amount of compression varies between activation functions; we also find large variation in compression across different network initializations. Second, we see that L2 regularization leads to significantly increased compression while preventing overfitting. Finally, we show that only compression of the last layer is positively correlated with generalization.
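To illustrate the kind of estimator the abstract describes, here is a minimal sketch of binning-based mutual information estimation in which the bin edges adapt to the observed activation range, so unbounded activations (e.g. ReLU) are handled as well as saturating ones. The function name, bin count, and toy data are illustrative assumptions, not the authors' implementation; it uses the standard simplification that for a deterministic network with distinct inputs, I(X;T) reduces to the entropy H(T) of the discretized hidden state.

```python
import numpy as np

def adaptive_bin_mi(activations, n_bins=30):
    """Estimate I(X;T) for one hidden layer by discretizing its activity T.

    Bin edges are placed over the *empirical* range of this layer's
    activations, rather than a fixed interval like [-1, 1], so the
    estimator adapts to unbounded activation functions. Assumes each row
    is the representation of one distinct input, so H(T|X) = 0 and
    I(X;T) = H(T).  (Illustrative sketch, not the paper's exact method.)
    """
    lo, hi = activations.min(), activations.max()
    edges = np.linspace(lo, hi, n_bins + 1)
    # Per-unit bin index; interior edges only, so values fall in 0..n_bins-1.
    digitized = np.digitize(activations, edges[1:-1])
    # Each unique row of bin indices is one discrete state of T.
    _, counts = np.unique(digitized, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))  # H(T) in bits

# Toy usage: ReLU-like hidden activity for 1000 samples, 10 units.
rng = np.random.default_rng(0)
h = np.maximum(rng.normal(size=(1000, 10)), 0.0)
mi = adaptive_bin_mi(h)
```

With 1000 distinct samples the estimate is bounded above by log2(1000) ≈ 9.97 bits; compression during training would appear as this quantity decreasing epoch over epoch.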

