Exploring the Properties and Evolution of Neural Network Eigenspaces during Training

06/17/2021
by Mats L. Richter, et al.

In this work we explore the information processing inside neural networks using logistic regression probes <cit.> and the saturation metric <cit.>. We show that problem difficulty and neural network capacity affect predictive performance in an antagonistic manner, opening the possibility of detecting over- and under-parameterization of neural networks for a given task. We further show that the observed effects are independent of previously reported pathological patterns such as the “tail pattern” described in <cit.>. Finally, we show that saturation patterns converge early during training, allowing for faster analysis cycles.
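To make the two diagnostics concrete, the sketch below shows one common way to compute them: a layer's saturation as the fraction of eigendirections of its feature covariance matrix needed to explain a variance threshold, and a logistic regression probe as a linear classifier trained on intermediate activations. This is an illustrative reconstruction under assumptions, not the authors' implementation; the function names, the delta = 0.99 threshold, and the in-sample probe score are all hypothetical choices.

```python
# Illustrative sketch of saturation and probe metrics; NOT the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def layer_saturation(acts: np.ndarray, delta: float = 0.99) -> float:
    """Fraction of eigendirections of the feature covariance matrix needed
    to explain a `delta` share of the variance (hypothetical reconstruction
    of the saturation metric; `delta` is an assumed threshold)."""
    acts = acts - acts.mean(axis=0)              # center the features
    cov = acts.T @ acts / (len(acts) - 1)        # d x d covariance matrix
    eigvals = np.linalg.eigvalsh(cov)[::-1]      # eigenvalues, descending
    ratios = np.cumsum(eigvals) / eigvals.sum()  # cumulative explained variance
    k = int(np.searchsorted(ratios, delta)) + 1  # smallest spanning eigenspace
    return k / acts.shape[1]                     # normalize by layer width

def probe_accuracy(acts: np.ndarray, labels: np.ndarray) -> float:
    """Logistic regression probe: how linearly decodable the labels are
    from this layer's activations (in-sample score, for illustration)."""
    probe = LogisticRegression(max_iter=1000).fit(acts, labels)
    return probe.score(acts, labels)

# Toy usage with random activations from a hypothetical layer of width 64.
rng = np.random.default_rng(0)
acts = rng.normal(size=(512, 64))
labels = rng.integers(0, 10, size=512)
print(layer_saturation(acts), probe_accuracy(acts, labels))
```

In the setting the abstract describes, such signals would be computed per layer over the course of training; the observation that saturation patterns converge early suggests they can be read off partially trained networks.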
