Mean-field Analysis of Generalization Errors

06/20/2023
by Gholamali Aminian et al.

We propose a novel framework for exploring weak and L_2 generalization errors of algorithms through the lens of differential calculus on the space of probability measures. Specifically, we consider the KL-regularized empirical risk minimization problem and establish generic conditions under which the generalization error convergence rate, when training on a sample of size n, is 𝒪(1/n). In the context of supervised learning with a one-hidden layer neural network in the mean-field regime, these conditions are reflected in suitable integrability and regularity assumptions on the loss and activation functions.
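To make the setting concrete, the following is a minimal sketch of KL-regularized empirical risk minimization for a one-hidden-layer network in the mean-field regime, approximated by a finite particle system trained with noisy (Langevin) gradient descent; the noise term implements the entropic (KL) regularization toward a Gaussian prior. All names, scalings, and hyperparameters here are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m = 200, 5, 100          # sample size, input dim, number of particles
sigma = 0.01                   # temperature: weight of the KL regularizer (assumed)
lr, steps = 0.05, 300          # step size and iteration count (assumed)

# Synthetic supervised data
X = rng.normal(size=(n, d))
y = np.tanh(X @ rng.normal(size=d))

# Particles: each pair (a_i, w_i) parameterizes one neuron; the empirical
# measure over particles approximates the mean-field distribution.
a = rng.normal(size=m) * 0.1
W = rng.normal(size=(m, d))

def predict(a, W, X):
    """Network output f(x) = sum_i a_i * tanh(w_i . x)."""
    return np.tanh(X @ W.T) @ a

for _ in range(steps):
    H = np.tanh(X @ W.T)                  # hidden activations, shape (n, m)
    resid = (H @ a - y)                   # derivative of the squared loss
    grad_a = H.T @ resid / n
    grad_W = ((resid[:, None] * (1 - H**2) * a[None, :]).T @ X) / n
    # Langevin step: gradient descent on the empirical risk plus a Gaussian
    # prior term (sigma * param) plus injected noise; the stationary law of
    # this dynamics is the KL-regularized (Gibbs) minimizer.
    a -= lr * (grad_a + sigma * a) - np.sqrt(2 * lr * sigma) * rng.normal(size=m)
    W -= lr * (grad_W + sigma * W) - np.sqrt(2 * lr * sigma) * rng.normal(size=(m, d))

mse = float(np.mean((predict(a, W, X) - y) ** 2))
```

The O(1/n) rate in the abstract concerns how the generalization gap of the resulting (regularized) minimizer shrinks with the sample size n; this sketch only illustrates the training objective, not the rate analysis.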
