The NTK approximation is valid for longer than you think

05/22/2023
by Enric Boix-Adserà, et al.

We study when the neural tangent kernel (NTK) approximation is valid for training a model with the square loss. In the lazy training setting of Chizat et al. (2019), we show that rescaling the model by a factor of α = O(T) suffices for the NTK approximation to be valid until training time T. Our bound is tight and improves on the previous bound of Chizat et al. (2019), which required a larger rescaling factor of α = O(T^2).
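The lazy-training effect behind this result can be seen numerically: when a model's output is rescaled by α and trained on the (correspondingly rescaled) square loss, the parameters move by only O(1/α), so the first-order (NTK) linearization around initialization stays accurate for longer. Below is a minimal numpy sketch, not the paper's code: the two-layer network, the symmetric initialization (so the output is zero at initialization), and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def lazy_train(alpha, steps=500, lr=0.2, m=200, n=20, seed=0):
    """Train the rescaled model alpha*h(theta) on square loss; return
    parameter movement ||theta_T - theta_0|| and final residual."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)                    # toy 1-d inputs (illustrative)
    y = np.sin(2 * x)                         # toy targets (illustrative)
    # Two-layer net h(x) = a . tanh(w x) / sqrt(m), with a symmetric
    # init (duplicated neurons, negated output weights) so h = 0 at init.
    w = rng.normal(size=m); w[m // 2:] = w[:m // 2]
    a = rng.normal(size=m); a[m // 2:] = -a[:m // 2]
    w0, a0 = w.copy(), a.copy()
    for _ in range(steps):
        z = np.tanh(np.outer(x, w))           # (n, m) hidden activations
        h = z @ a / np.sqrt(m)                # (n,) unrescaled output
        # Gradient of the rescaled loss R(alpha*h)/alpha^2 (mean over data)
        # w.r.t. h is (h - y/alpha)/n: the 1/alpha factor is what freezes
        # the parameters as alpha grows.
        r = (h - y / alpha) / n
        ga = z.T @ r / np.sqrt(m)                         # dL/da
        gw = ((1 - z**2).T @ (r * x)) * a / np.sqrt(m)    # dL/dw
        a -= lr * ga
        w -= lr * gw
    move = np.sqrt(np.sum((w - w0) ** 2) + np.sum((a - a0) ** 2))
    resid = np.linalg.norm(alpha * (np.tanh(np.outer(x, w)) @ a / np.sqrt(m)) - y)
    return move, resid
```

Running this with a large α (e.g. 100) versus α = 1 shows the parameter movement shrinking roughly in proportion to 1/α, which is the regime in which the NTK linearization is a good approximation of the training dynamics.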
