On the complexity of convex inertial proximal algorithms

01/23/2018
by Tao Sun et al.

The inertial proximal gradient algorithm is efficient for composite optimization problems. Recently, the convergence of a special inertial proximal gradient algorithm under strong convexity has also been studied. In this paper, we present further novel convergence complexity results, focusing on the convergence rates of the function values. A non-ergodic O(1/k) rate is proved for the inertial proximal gradient algorithm with constant stepsize when the objective function is coercive. When the objective function fails to be coercive, we prove a sublinear rate with diminishing inertial parameters. When the function satisfies a condition that is much weaker than strong convexity, linear convergence is proved with a larger and more general stepsize than in the previous literature. We also extend our results to the multi-block version and present its computational complexity. Both cyclic and stochastic index selection strategies are considered.
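For context, the inertial proximal gradient iteration for the composite problem min_x f(x) + g(x) takes the form y^k = x^k + beta_k (x^k - x^{k-1}), followed by x^{k+1} = prox_{gamma g}(y^k - gamma grad f(y^k)). Below is a minimal sketch in Python/NumPy on an l1-regularized least-squares (LASSO) instance, where the proximal operator is soft-thresholding and the stepsize is the constant gamma = 1/L. The specific test problem, parameter values, and constant inertial parameter beta are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_proximal_gradient(A, b, lam, beta=0.5, iters=500):
    """Inertial proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Iteration: y_k     = x_k + beta * (x_k - x_{k-1})
               x_{k+1} = prox_{gamma*lam*||.||_1}(y_k - gamma * grad f(y_k))
    with constant stepsize gamma = 1/L, L the Lipschitz constant of grad f.
    """
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # spectral norm squared = Lipschitz const
    gamma = 1.0 / L                      # constant stepsize
    x_prev = x = np.zeros(n)
    for _ in range(iters):
        y = x + beta * (x - x_prev)      # inertial (extrapolation) step
        grad = A.T @ (A @ y - b)         # gradient of 0.5*||Ay - b||^2
        x_prev, x = x, soft_threshold(y - gamma * grad, gamma * lam)
    return x

# Small random LASSO instance (illustrative data only)
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = inertial_proximal_gradient(A, b, lam=0.1)
print("nonzeros recovered:", np.sum(np.abs(x_hat) > 1e-3))
```

In the paper's terminology, a constant beta as above corresponds to the constant-stepsize setting for which the non-ergodic O(1/k) rate is claimed under coercivity, while the non-coercive case would use a diminishing schedule beta_k in place of the fixed beta.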

