Understanding the unstable convergence of gradient descent

04/03/2022
by Kwangjun Ahn et al.

Most existing analyses of (stochastic) gradient descent rely on the condition that, for an L-smooth cost, the step size is less than 2/L. However, many works have observed that in machine learning applications step sizes often do not satisfy this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles and elucidate the key causes behind it. We also identify its main characteristics and how they interrelate, offering a transparent view backed by both theory and experiments.
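As an illustrative aside (not part of the paper), the classical 2/L threshold mentioned in the abstract is easiest to see on a quadratic cost, where it is sharp. The minimal NumPy sketch below uses an assumed smoothness constant L, hypothetical step-size choices just below and just above 2/L, and an illustrative helper run_gd; it only demonstrates the classical stability condition, not the paper's analysis.

```python
import numpy as np

# Illustrative sketch: gradient descent on the L-smooth quadratic
# f(x) = (L/2) * x**2, whose gradient is L * x. On this cost the
# iterates satisfy x_{t+1} = (1 - eta * L) * x_t, so they converge
# exactly when the step size eta is less than 2/L and diverge otherwise.
L = 4.0                       # smoothness constant (chosen for the demo)
grad = lambda x: L * x        # gradient of f(x) = (L/2) x^2

def run_gd(step_size, x0=1.0, iters=20):
    """Run plain gradient descent and return the final iterate."""
    x = x0
    for _ in range(iters):
        x = x - step_size * grad(x)   # x_{t+1} = x_t - eta * f'(x_t)
    return x

for eta in (1.8 / L, 2.2 / L):        # just below and just above 2/L
    print(f"eta = {eta:.3f} (2/L = {2 / L:.3f}): x_20 = {run_gd(eta):.3e}")
```

On the quadratic, the step size above 2/L produces iterates that blow up, whereas the abstract's point is that on machine learning losses, which are not quadratic, training with such step sizes is often observed to keep decreasing the loss, just non-monotonically; this is the unstable convergence the paper studies.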
