Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization

03/01/2017
by Tomoya Murata, et al.

In this paper, we develop a new accelerated stochastic gradient method for efficiently solving convex regularized empirical risk minimization problems in mini-batch settings. The use of mini-batches has become a gold standard in the machine learning community, because mini-batches stabilize the gradient estimate and lend themselves naturally to parallel computing. The core of our proposed method is the combination of our new "double acceleration" technique with a variance reduction technique. We analyze our method theoretically and show that it substantially improves the mini-batch efficiency of previous accelerated stochastic methods: it essentially needs mini-batches of size only √(n) to achieve the optimal iteration complexities for both non-strongly and strongly convex objectives, where n is the training set size. Furthermore, we show that even in non-mini-batch settings, our method achieves the best known convergence rates for both non-strongly and strongly convex objectives.
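To make the variance-reduction ingredient concrete, here is a minimal sketch of an SVRG-style mini-batch estimator for ridge-regularized least squares, using the size-√(n) mini-batches the abstract highlights. This is an illustrative sketch only, not the paper's doubly accelerated dual averaging method; all names (minibatch_svrg, eta, lam) and the choice of objective are assumptions for the example.

```python
# Minimal sketch of mini-batch SVRG-style variance reduction for
# (1/2n)||Xw - y||^2 + (lam/2)||w||^2. Illustrative only; NOT the
# paper's doubly accelerated dual averaging method.
import numpy as np

def full_gradient(w, X, y, lam):
    """Full gradient of the ridge-regularized least-squares objective."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n + lam * w

def minibatch_svrg(X, y, lam=1e-3, eta=0.1, epochs=20, batch=None, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if batch is None:
        batch = max(1, int(np.sqrt(n)))  # size-sqrt(n) mini-batches, per the abstract
    w = np.zeros(d)
    for _ in range(epochs):
        w_ref = w.copy()
        g_ref = full_gradient(w_ref, X, y, lam)  # snapshot full gradient
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            Xb, yb = X[idx], y[idx]
            # Variance-reduced estimate: mini-batch gradient at w, corrected by
            # the same mini-batch gradient at the snapshot point w_ref. The two
            # stochastic terms share randomness, so their noise largely cancels.
            g_w = Xb.T @ (Xb @ w - yb) / batch + lam * w
            g_s = Xb.T @ (Xb @ w_ref - yb) / batch + lam * w_ref
            w -= eta * (g_w - g_s + g_ref)
    return w
```

The corrected estimator g_w - g_s + g_ref is unbiased and its variance shrinks as w approaches w_ref, which is what lets variance-reduced methods use a constant step size; the paper's contribution is to wrap such an estimator in inner and outer ("double") acceleration within a dual averaging scheme.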

Related research

10/07/2018
ASVRG: Accelerated Proximal SVRG
This paper proposes an accelerated proximal stochastic variance reduced ...

05/12/2013
Accelerated Mini-Batch Stochastic Dual Coordinate Ascent
Stochastic dual coordinate ascent (SDCA) is an effective technique for s...

11/25/2018
Inexact SARAH Algorithm for Stochastic Optimization
We develop and analyze a variant of variance reducing stochastic gradien...

03/08/2016
Stochastic dual averaging methods using variance reduction techniques for regularized empirical risk minimization problems
We consider a composite convex minimization problem associated with regu...

02/10/2023
Achieving acceleration despite very noisy gradients
We present a novel momentum-based first order optimization method (AGNES...

04/23/2023
Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization
Nowadays, algorithms with fast convergence, small memory footprints, and...

03/09/2020
Amortized variance reduction for doubly stochastic objectives
Approximate inference in complex probabilistic models such as deep Gauss...
