Data Encoding for Byzantine-Resilient Distributed Optimization

07/05/2019
by Deepesh Data, et al.

We study distributed optimization in the presence of Byzantine adversaries, where both data and computation are distributed among m worker machines, t of which can be corrupt and collaboratively deviate arbitrarily from their pre-specified programs, and a designated (master) node iteratively computes the model/parameter vector for generalized linear models. In this work, we primarily focus on two iterative algorithms: Proximal Gradient Descent (PGD) and Coordinate Descent (CD); gradient descent (GD) is a special case of both. PGD is typically used in the data-parallel setting, where the data is partitioned across samples, whereas CD is used in the model-parallel setting, where the data is partitioned across the parameter space. In this paper, we propose a method based on data encoding and error correction over real numbers to combat adversarial attacks. We can tolerate up to t ≤ ⌊(m-1)/2⌋ corrupt worker nodes, which is information-theoretically optimal. We give deterministic guarantees, and our method does not assume any probability distribution on the data. We develop a sparse encoding scheme which enables computationally efficient data encoding and decoding, and we demonstrate a trade-off between the corruption threshold and the resource requirements (storage and computational/communication complexity). For example, for t ≤ m/3, our scheme incurs only a constant overhead on these resources over that of plain distributed PGD/CD, which provides no adversarial protection. Our encoding scheme extends efficiently to (i) the data streaming model, where data samples arrive in an online fashion and are encoded as they arrive, and (ii) making stochastic gradient descent (SGD) Byzantine-resilient. Finally, we give experimental results to show the efficacy of our method.
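To make the high-level idea concrete, the following is a minimal toy sketch (not the paper's sparse real-number encoding scheme): it illustrates only the general principle that redundancy in how data is assigned to workers lets the master recover the correct aggregate gradient despite up to t arbitrarily corrupted responses. All names here (worker_gradient, robust_aggregate, the replication layout with 2t+1 copies per partition) are hypothetical choices for illustration; the paper's scheme achieves the same effect with far less overhead via sparse encoding and decoding over the reals.

```python
# Toy sketch of Byzantine-resilient gradient aggregation via plain replication
# and majority decoding. Assumption: each data partition is replicated at
# 2t + 1 workers, so at most t corrupt replies per group cannot outvote the
# t + 1 identical honest replies. This is NOT the paper's encoding scheme.
import numpy as np

def worker_gradient(X, y, w, corrupt=False):
    """Least-squares partial gradient; a corrupt worker returns arbitrary garbage."""
    if corrupt:
        return np.random.randn(w.shape[0]) * 1e3
    return X.T @ (X @ w - y)

def robust_aggregate(replies):
    """Majority decoding: the most frequent reply (up to rounding) is the
    honest partial gradient, since honest copies agree exactly."""
    vals, counts = np.unique(np.round(np.stack(replies), 8), axis=0,
                             return_counts=True)
    return vals[np.argmax(counts)]

# Usage: 4 data partitions, tolerating t = 1 corrupt worker per replica group.
rng = np.random.default_rng(0)
d, n_per_part, t = 5, 20, 1
w = rng.standard_normal(d)
parts = [(rng.standard_normal((n_per_part, d)), rng.standard_normal(n_per_part))
         for _ in range(4)]
grad = np.zeros(d)
for p, (Xp, yp) in enumerate(parts):
    replies = [worker_gradient(Xp, yp, w, corrupt=(i == 0 and p == 2))
               for i in range(2 * t + 1)]
    grad += robust_aggregate(replies)
```

Replication costs a factor of 2t + 1 in storage and computation; the paper's contribution is precisely to avoid this blow-up, e.g., incurring only constant overhead for t ≤ m/3.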


