Modify Training Directions in Function Space to Reduce Generalization Error

07/25/2023
by Yi Yu, et al.

We propose a theoretical analysis of a modified natural gradient descent method in neural network function space, based on eigendecompositions of the neural tangent kernel and the Fisher information matrix. We first present an analytical expression for the function learned by this modified natural gradient under the assumptions of a Gaussian data distribution and the infinite-width limit, and then explicitly derive the generalization error of the learned function using eigendecomposition and results from statistical theory. By decomposing the total generalization error into contributions from the different eigenspaces of the kernel in function space, we propose a criterion for balancing the error stemming from the finite training set against the error from the distribution discrepancy between the training set and the true data distribution. Through this approach, we establish that modifying the training direction of the neural network in function space reduces the total generalization error. Furthermore, we demonstrate that this theoretical framework can explain many existing approaches to improving generalization. These theoretical results are also illustrated by numerical experiments on synthetic data.
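The abstract describes modifying the training direction in function space via the eigendecomposition of the neural tangent kernel. The sketch below is only an illustration of that idea for a linearized model on synthetic data: the kernel, the eigenmode weights, and the step size are assumptions made here for demonstration and are not the paper's construction. It reweights a kernel-gradient-flow update along the eigendirections of an empirical kernel and reports a per-eigenmode decomposition of the training error.

# Illustrative sketch only; the kernel and the eigenmode weights below are
# hypothetical stand-ins, not the paper's modified natural gradient.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data.
n = 40
X = np.sort(rng.uniform(-1.0, 1.0, size=(n, 1)), axis=0)
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=n)

def ntk_like_kernel(A, B, bandwidth=0.5):
    # Gaussian kernel playing the role of the neural tangent kernel (assumption).
    d2 = (A[:, None, 0] - B[None, :, 0]) ** 2
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

K = ntk_like_kernel(X, X)                 # empirical kernel on the training set
eigvals, eigvecs = np.linalg.eigh(K)      # K = V diag(lam) V^T

def modified_step(f, y, eta, weights):
    # Plain kernel gradient flow on the function values would be
    # f <- f - eta * K (f - y); here the update is reweighted per eigenspace.
    residual = f - y
    coeffs = eigvecs.T @ residual         # residual in the eigenbasis
    update = eigvecs @ (weights * eigvals * coeffs)
    return f - eta * update

# Example weighting (an assumption): damp the smallest-eigenvalue directions,
# which are the hardest to estimate reliably from a finite training set.
weights = np.where(eigvals > 1e-3 * eigvals.max(), 1.0, 0.1)

f = np.zeros(n)
for _ in range(200):
    f = modified_step(f, y, eta=0.02, weights=weights)

# Per-eigenspace contribution to the training error, mirroring the idea of
# decomposing the total error over eigenspaces of the kernel.
per_mode_error = (eigvecs.T @ (f - y)) ** 2
print("training MSE:", np.mean((f - y) ** 2))
print("largest per-mode errors:", np.sort(per_mode_error)[-3:])

Changing the weights changes which eigenspaces the learned function fits first, which is the mechanism the paper analyzes when trading training-set error against distribution-discrepancy error.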


