Empirical Risk Minimization with Shuffled SGD: A Primal-Dual Perspective and Improved Bounds

06/21/2023
by Xufeng Cai, et al.

Stochastic gradient descent (SGD) is perhaps the most prevalent optimization method in modern machine learning. Contrary to the empirical practice of sampling from the dataset without replacement and with (possible) reshuffling at each epoch, the theoretical counterpart of SGD usually relies on the assumption of sampling with replacement. It is only very recently that SGD with sampling without replacement – shuffled SGD – has been analyzed. For convex finite-sum problems with n components and under the L-smoothness assumption on each component function, there are matching upper and lower bounds, under sufficiently small – 𝒪(1/nL) – step sizes. Yet those bounds appear too pessimistic – in fact, the predicted performance is generally no better than that of full gradient descent – and do not agree with empirical observations. In this work, to narrow the gap between the theory and practice of shuffled SGD, we sharpen the focus from general finite-sum problems to empirical risk minimization with linear predictors. This allows us to take a primal-dual perspective and interpret shuffled SGD as a primal-dual method with cyclic coordinate updates on the dual side. Leveraging this perspective, we prove a fine-grained complexity bound that depends on the data matrix and is never worse than what is predicted by the existing bounds. Notably, our bound can predict much faster convergence than the existing analyses – by a factor of order √(n) in some cases. We empirically demonstrate that on common machine learning datasets our bound is indeed much tighter. We further show how to extend our analysis to convex nonsmooth problems, with similar improvements.
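To make the sampling distinction concrete, the sketch below contrasts with-replacement SGD and shuffled (without-replacement) SGD on a least-squares ERM problem with a linear predictor. The loss, data, and 𝒪(1/nL) step-size choice are illustrative assumptions for this sketch; it is not the primal-dual method or the analysis from the paper.

```python
import numpy as np

def shuffled_sgd(A, b, epochs=50, step=None, reshuffle=True, seed=0):
    """Shuffled SGD for least-squares ERM: min_x (1/2n) ||Ax - b||^2.

    Each epoch visits every sample exactly once in a (re)shuffled order,
    instead of drawing indices i.i.d. with replacement.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    # Illustrative step size of order 1/(n*L), with L the largest
    # per-component smoothness constant ||a_i||^2 for the squared loss.
    L = np.max(np.sum(A * A, axis=1))
    if step is None:
        step = 1.0 / (n * L)
    x = np.zeros(d)
    perm = rng.permutation(n)
    for _ in range(epochs):
        if reshuffle:
            perm = rng.permutation(n)  # random reshuffling each epoch
        for i in perm:
            grad_i = (A[i] @ x - b[i]) * A[i]  # gradient of component i
            x -= step * grad_i
    return x

def with_replacement_sgd(A, b, iters, step, seed=0):
    """Plain SGD: the index i is drawn uniformly at random at each step."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)
        grad_i = (A[i] @ x - b[i]) * A[i]
        x -= step * grad_i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 10))
    x_true = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = shuffled_sgd(A, b, epochs=50)
    print("objective:", 0.5 * np.mean((A @ x_hat - b) ** 2))
```

In this toy setting the only difference between the two routines is the index sequence: shuffled SGD processes a permutation of the n components per epoch, which is the regime the paper's data-dependent bounds address for linear predictors.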

