SGD Through the Lens of Kolmogorov Complexity

11/10/2021
by Gregory Schwartzman, et al.

We prove that stochastic gradient descent (SGD) finds a solution that achieves (1-ϵ) classification accuracy on the entire dataset. We do so under two main assumptions: (1) Local progress: the model's accuracy improves consistently over batches. (2) Models compute simple functions: the function computed by the model is simple (has low Kolmogorov complexity). Intuitively, these assumptions mean that local progress of SGD implies global progress. Assumption (2) trivially holds for underparameterized models; hence, our work gives the first convergence guarantee for general, underparameterized models. Furthermore, this is the first result that is completely model-agnostic: we do not require the model to have any specific architecture or activation function, and it need not even be a neural network. Our analysis makes use of the entropy compression method, which was first introduced by Moser and Tardos in the context of the Lovász local lemma.
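To make the "local progress" assumption concrete, here is a minimal, self-contained sketch (not the paper's formal setup): a linear classifier trained with SGD on synthetic data, where each batch update is checked against an illustrative improvement margin. All names (`make_data`, `sgd_with_local_progress`, `local_progress_margin`) are hypothetical and chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=2000, d=20):
    # Roughly linearly separable synthetic data (illustrative only).
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

def accuracy(w, X, y):
    return np.mean((X @ w > 0) == y)

def sgd_with_local_progress(X, y, batch_size=64, lr=0.1, epochs=5,
                            local_progress_margin=0.0):
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = perm[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            acc_before = accuracy(w, Xb, yb)
            # One SGD step on the logistic loss over the batch.
            p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
            grad = Xb.T @ (p - yb) / len(idx)
            w -= lr * grad
            acc_after = accuracy(w, Xb, yb)
            # "Local progress" in spirit: the batch accuracy should improve
            # by at least the margin. In the paper this is an assumption on
            # the training dynamics, not something the algorithm enforces.
            if acc_after - acc_before < local_progress_margin:
                pass
    return w

X, y = make_data()
w = sgd_with_local_progress(X, y)
print(f"full-dataset accuracy: {accuracy(w, X, y):.3f}")
```

The paper's claim is the converse direction of this picture: if such per-batch (local) progress holds and the learned function has low Kolmogorov complexity, then accuracy on the entire dataset (global progress) follows.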
