Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error

05/02/2018
by Grant M. Rotskoff, et al.

Neural networks, a central tool in machine learning, have demonstrated remarkable, high-fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high-dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks, or about how these quantities scale with the network size. Here we characterize both the error and the scaling by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number n of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of n. We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as O(n^-1). Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in dimensions as high as d = 25.
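A minimal sketch of the particle-system reinterpretation described above, in notation introduced here for illustration (the feature function \varphi, amplitudes c_i, unit parameters z_i, and data measure \nu are assumptions, not necessarily the paper's choices): write the network output as an average over n units and expand the quadratic loss,

\[
  f_n(x) = \frac{1}{n}\sum_{i=1}^{n} c_i\,\varphi(x, z_i),
  \qquad
  \mathcal{L} = \frac{1}{2}\int \bigl(f(x) - f_n(x)\bigr)^2\, d\nu(x)
  = \tfrac{1}{2}\|f\|^2_{\nu}
  - \frac{1}{n}\sum_{i=1}^{n} c_i\, F(z_i)
  + \frac{1}{2n^2}\sum_{i,j=1}^{n} c_i c_j\, K(z_i, z_j),
\]
\[
  F(z) = \int f(x)\,\varphi(x, z)\, d\nu(x),
  \qquad
  K(z, z') = \int \varphi(x, z)\,\varphi(x, z')\, d\nu(x).
\]

In this form, (stochastic) gradient descent on the parameters \theta_i = (c_i, z_i) is the noisy dynamics of n particles in an external potential set by F with pairwise interactions set by K, and the loss depends on the particles only through their empirical distribution \rho_n = \frac{1}{n}\sum_i \delta_{\theta_i}. Heuristically, the Law of Large Numbers gives the n \to \infty limit of \rho_n, while the Central Limit Theorem controls fluctuations of size n^{-1/2} in f_n; since the loss is quadratic in f - f_n, these fluctuations contribute to the error at order n^-1, consistent with the scaling stated in the abstract.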

