Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems

07/10/2014
by Aaron J. Defazio, et al.

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black-box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also amenable to a sampling-without-replacement scheme that in practice gives further speed-ups. We give empirical results showing state-of-the-art performance.
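The abstract does not spell out the update rule, but methods in this class maintain a table of per-term iterates and their gradients rather than recomputing a full batch gradient each step. Below is a minimal Python sketch of a Finito-style loop with per-epoch permutation sampling (the sampling-without-replacement scheme mentioned above). The function name `finito_style_solver`, its parameters, and the default `alpha = 2` are illustrative assumptions for this sketch, not the paper's reference implementation.

```python
import numpy as np

def finito_style_solver(grad_funcs, dim, n_epochs=50, alpha=2.0,
                        strong_convexity=1.0, rng=None):
    """Sketch of a Finito-style incremental gradient loop (assumed interface).

    grad_funcs: list of callables, grad_funcs[i](w) -> gradient of f_i at w.
    strong_convexity: the strong convexity constant s, assumed known here.
    alpha: step parameter; the paper's analysis allows small values (e.g. 2)
           when the number of terms n is large enough relative to L/s.
    """
    rng = rng or np.random.default_rng(0)
    n = len(grad_funcs)

    # Tables of per-term points phi_i and their stored gradients.
    phi = np.zeros((n, dim))
    grads = np.array([g(phi[i]) for i, g in enumerate(grad_funcs)])

    # Running averages so each step costs O(dim), not O(n * dim).
    phi_mean = phi.mean(axis=0)
    grad_mean = grads.mean(axis=0)

    for _ in range(n_epochs):
        # Sampling without replacement: a fresh permutation each epoch.
        for j in rng.permutation(n):
            # Finito-style step: average of the stored points minus a
            # scaled average of the stored gradients.
            w = phi_mean - grad_mean / (alpha * strong_convexity)

            # Replace the j-th table entries and update the running means.
            new_grad = grad_funcs[j](w)
            phi_mean += (w - phi[j]) / n
            grad_mean += (new_grad - grads[j]) / n
            phi[j] = w
            grads[j] = new_grad

    return phi_mean - grad_mean / (alpha * strong_convexity)
```

As a usage sketch, for a least-squares sum one would pass `grad_funcs = [lambda w, a=a_i, b=b_i: a * (a @ w - b) for a_i, b_i in data]`; the per-epoch permutation is what distinguishes the without-replacement variant from uniform sampling with replacement.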
