Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization

09/10/2013
by Shai Shalev-Shwartz and Tong Zhang

We introduce a proximal version of the stochastic dual coordinate ascent method and show how to accelerate the method using an inner-outer iteration procedure. We analyze the runtime of the framework and obtain rates that improve state-of-the-art results for various key machine learning optimization problems including SVM, logistic regression, ridge regression, Lasso, and multiclass SVM. Experiments validate our theoretical findings.
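
To make the setup concrete, below is a minimal NumPy sketch of the inner proximal SDCA solver for the squared-loss case with elastic-net-style regularization (lam/2)*||w||^2 + l1*||w||_1. The function name, parameter names, and the epoch-over-random-permutation sampling are illustrative assumptions rather than the paper's reference implementation, and the outer accelerated (inner-outer) loop described in the abstract is omitted. The dual step uses the standard closed-form update for squared loss, and the primal iterate is recovered from the dual-averaged vector v through a proximal (soft-thresholding) map.

```python
import numpy as np

def prox_sdca_squared_loss(X, y, lam, l1=0.0, epochs=50, seed=0):
    """Sketch of proximal SDCA for
        min_w (1/(2n)) * sum_i (x_i @ w - y_i)**2 + (lam/2)*||w||^2 + l1*||w||_1
    with lam > 0. Maintains dual variables alpha and v = (1/(lam*n)) * X.T @ alpha;
    the primal iterate is w = grad g*(v), a componentwise soft-threshold.
    With l1 = 0 this reduces to plain SDCA for ridge regression.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)                      # one dual variable per example
    v = np.zeros(d)                          # v = (1/(lam*n)) * X.T @ alpha
    thresh = l1 / lam                        # soft-threshold level for grad g*
    sq_norms = np.einsum("ij,ij->i", X, X)   # precompute ||x_i||^2

    def primal(v):
        # Proximal step: w = grad g*(v) is componentwise soft-thresholding
        return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

    for _ in range(epochs):
        for i in rng.permutation(n):
            w = primal(v)
            # Closed-form dual coordinate step for squared loss
            # (valid because g is 1-strongly convex, so g* is 1-smooth)
            delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            v += (delta / (lam * n)) * X[i]
    return primal(v)
```

Two design points worth noting. The L1 term is handled entirely through the proximal map on v, so each coordinate update stays closed form, which is the property that lets the method cover Lasso-type problems; and in the paper's accelerated scheme this solver would serve as the inner routine, called by an outer loop on a sequence of shifted, strongly convex subproblems.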
