Byzantine-Robust Decentralized Stochastic Optimization with Stochastic Gradient Noise-Independent Learning Error

08/10/2023
by   Jie Peng, et al.

This paper studies Byzantine-robust stochastic optimization over a decentralized network, where every agent periodically communicates with its neighbors to exchange local models and then updates its own local model by stochastic gradient descent (SGD). The performance of such a method is affected by an unknown number of Byzantine agents, which behave adversarially during the optimization process. To the best of our knowledge, no existing work simultaneously achieves a linear convergence speed and a small learning error. We observe that the learning error is largely dependent on the intrinsic stochastic gradient noise. Motivated by this observation, we introduce two variance reduction methods, the stochastic average gradient algorithm (SAGA) and the loopless stochastic variance-reduced gradient (LSVRG), into Byzantine-robust decentralized stochastic optimization to eliminate the negative effect of the stochastic gradient noise. The two resulting methods, BRAVO-SAGA and BRAVO-LSVRG, enjoy both linear convergence speeds and stochastic gradient noise-independent learning errors. Such learning errors are optimal for a class of methods based on total variation (TV)-norm regularization and stochastic subgradient updates. We conduct extensive numerical experiments to demonstrate their effectiveness under various Byzantine attacks.
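To make the recipe in the abstract concrete, the following is a minimal sketch of one regular agent's update that combines a SAGA-style variance-reduced gradient with the subgradient of a TV-norm penalty toward neighboring models. It is an illustration under simplifying assumptions, not the paper's exact BRAVO-SAGA algorithm: the toy least-squares loss, the fixed neighbor models, the function names (sample_grad, saga_gradient, local_update), and the parameters lr and lam are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy local dataset for one regular agent: per-sample least-squares losses.
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def sample_grad(x, i):
    """Gradient of the i-th local sample's loss (a_i^T x - b_i)^2 / 2."""
    return A[i] * (A[i] @ x - b[i])

def saga_gradient(x, i, table):
    """SAGA-style variance-reduced gradient estimate for sample i.

    Corrects the fresh stochastic gradient with the stored gradient of
    sample i and the running average of all stored gradients, so the
    estimator's variance shrinks as the iterates converge.
    """
    g_new = sample_grad(x, i)
    g_vr = g_new - table[i] + table.mean(axis=0)
    table[i] = g_new  # refresh the stored gradient for sample i
    return g_vr

def local_update(x_local, neighbor_models, g_vr, lr=0.05, lam=0.1):
    """One decentralized step: variance-reduced gradient plus the
    subgradient of the TV-norm penalty lam * sum_j ||x_local - x_j||_1,
    which pulls the local model toward its neighbors' models."""
    tv_subgrad = sum(np.sign(x_local - x_j) for x_j in neighbor_models)
    return x_local - lr * (g_vr + lam * tv_subgrad)

# Illustrative run with neighbors held fixed and samples drawn uniformly.
x = np.zeros(5)
table = np.zeros((20, 5))                 # stored per-sample gradients
neighbors = [rng.normal(size=5) for _ in range(3)]
for t in range(200):
    i = rng.integers(20)
    x = local_update(x, neighbors, saga_gradient(x, i, table))
```

The TV-norm penalty bounds, in l1 distance, how far any single (possibly Byzantine) neighbor can pull the local model, while the SAGA correction is what removes the stochastic-gradient-noise term from the learning error that the abstract highlights; LSVRG would replace the per-sample gradient table with an occasionally refreshed full-gradient snapshot.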

