On the Saturation Phenomenon of Stochastic Gradient Descent for Linear Inverse Problems

10/21/2020
by   Bangti Jin, et al.

Stochastic gradient descent (SGD) is a promising method for solving large-scale inverse problems because of its excellent scalability with respect to data size. The current mathematical theory, viewed through the lens of regularization theory, predicts that SGD with a polynomially decaying stepsize schedule may suffer from an undesirable saturation phenomenon: the convergence rate ceases to improve with the regularity index of the solution once that index exceeds a certain value. In this work, we present a refined convergence rate analysis of SGD and prove that saturation does not actually occur if the initial stepsize of the schedule is sufficiently small. Several numerical experiments are provided to complement the analysis.
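The following minimal sketch (not the authors' implementation) illustrates the setting the abstract describes: SGD applied to a discrete linear inverse problem A x = y, sampling one equation per step and using a polynomially decaying stepsize schedule eta_j = eta0 * j**(-alpha). The function name sgd_linear_inverse and the parameters eta0, alpha, and n_iter are illustrative assumptions, not notation from the paper.

import numpy as np

def sgd_linear_inverse(A, y, eta0=1e-2, alpha=0.5, n_iter=10000, seed=0):
    # SGD for the least-squares functional (1/2n) * ||A x - y||^2,
    # sampling one data pair (row of A, entry of y) per iteration.
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)                      # zero initial guess
    for j in range(1, n_iter + 1):
        i = rng.integers(n)              # draw one equation uniformly at random
        eta = eta0 * j ** (-alpha)       # polynomially decaying stepsize schedule
        grad = (A[i] @ x - y[i]) * A[i]  # stochastic gradient of the i-th term
        x -= eta * grad
    return x

# Toy usage: a small noisy linear system.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
x_true = rng.standard_normal(50)
y = A @ x_true + 0.01 * rng.standard_normal(200)
x_hat = sgd_linear_inverse(A, y)
print(np.linalg.norm(x_hat - x_true))

In this sketch the quantity the saturation question turns on is the initial stepsize eta0: per the abstract, the refined analysis shows that choosing it sufficiently small avoids the saturation of the convergence rate.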
