The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication

09/11/2019
by Sebastian U. Stich et al.

We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic convergence rates. We show that in all cases the rate of convergence consists of two terms: (i) a stochastic term that is not affected by the delay, and (ii) a higher-order deterministic term that is only linearly slowed down by the delay. Thus, in the presence of noise, the effect of the delay becomes negligible after a few iterations and the algorithm converges at the same optimal rate as standard SGD. This result extends a line of research that showed similar results only in the asymptotic regime or for strongly convex quadratic functions. We further show similar results for SGD with more intricate forms of delayed gradients: compressed gradients under error compensation, and local SGD, where multiple workers perform local steps before communicating with each other. In all of these settings, we improve upon the best known rates. These results show that SGD is robust to compressed and/or delayed stochastic gradient updates. This is particularly important for distributed parallel implementations, where asynchronous and communication-efficient methods are key to achieving linear speedups for optimization with multiple devices.
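To make the error-compensation mechanism concrete, here is a minimal sketch of error-feedback SGD with a top-k sparsifier. This is an illustration of the general idea rather than the authors' code: the choice of compressor, the function names (top_k, ef_sgd), and the hyperparameters (lr, k, steps) are assumptions made for the example.

```python
import numpy as np

def top_k(v, k):
    # Biased sparsifier: keep the k largest-magnitude entries, zero the rest.
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad, x0, lr=0.1, k=2, steps=500):
    # Error-feedback SGD: compress (step + carried-over error), apply only
    # the compressed part, and remember the dropped residual for later steps.
    x = x0.copy()
    e = np.zeros_like(x0)            # error memory (residual not yet applied)
    for _ in range(steps):
        p = lr * grad(x) + e         # add back previously dropped mass
        delta = top_k(p, k)          # what actually gets communicated/applied
        e = p - delta                # residual stays in memory, is not lost
        x = x - delta
    return x

# Toy usage on a noisy quadratic f(x) = 0.5 * ||x||^2, so grad f(x) = x.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.01 * rng.standard_normal(x.shape)
x_final = ef_sgd(noisy_grad, x0=np.ones(10))
```

Because the residual e is folded into the next update rather than discarded, the compression error is eventually applied in full; this is the mechanism behind the abstract's claim that compressed or delayed updates leave the leading stochastic term of the rate unchanged.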


Related research

06/16/2022 · Sharper Convergence Guarantees for Asynchronous SGD for Distributed and Federated Learning
We study the asynchronous stochastic gradient descent algorithm for dist...

05/21/2021 · Escaping Saddle Points with Compressed SGD
Stochastic gradient descent (SGD) is a prevalent optimization technique ...

10/23/2020 · Linearly Converging Error Compensated SGD
In this paper, we propose a unified analysis of variants of distributed ...

02/12/2021 · Stability and Convergence of Stochastic Gradient Clipping: Beyond Lipschitz Continuity and Smoothness
Stochastic gradient algorithms are often unstable when applied to functi...

01/08/2018 · How To Make the Gradients Small Stochastically
In convex stochastic optimization, convergence rates in terms of minimiz...

07/26/2020 · CSER: Communication-efficient SGD with Error Reset
The scalability of Distributed Stochastic Gradient Descent (SGD) is toda...

06/22/2021 · Asynchronous Stochastic Optimization Robust to Arbitrary Delays
We consider stochastic optimization with delayed gradients where, at eac...
