Byzantine-Robust Learning on Heterogeneous Datasets via Resampling

06/16/2020
by Lie He, et al.

In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers. However, a fraction of these workers may deviate from the prescribed algorithm and send arbitrary messages to the server. While this problem has received significant attention recently, most current defenses assume that the workers hold identical data. For the realistic case where the data across workers is heterogeneous (non-iid), we design new attacks that circumvent these defenses, leading to a significant loss of performance. We then propose a simple resampling scheme that adapts existing robust algorithms to heterogeneous datasets at negligible computational cost. We validate our approach theoretically and experimentally, showing that combining resampling with existing robust algorithms is effective against challenging attacks.
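Below is a minimal sketch of the resampling idea described in the abstract, assuming the scheme forms n new gradients by averaging s randomly drawn worker gradients (each worker's gradient reused exactly s times) and only then applies an existing robust aggregator. The function names and the choice of coordinate-wise median as the aggregator are illustrative assumptions, not the paper's exact implementation.

    import numpy as np

    def resample_gradients(grads, s=2, rng=None):
        # Illustrative s-replacement resampling: build len(grads) new
        # gradients, each the mean of s worker gradients, with every
        # worker gradient used exactly s times across the outputs.
        rng = rng or np.random.default_rng()
        n = len(grads)
        # Each worker index appears s times; shuffle, then split into n groups of s.
        idx = rng.permutation(np.repeat(np.arange(n), s))
        return [np.mean([grads[i] for i in group], axis=0)
                for group in idx.reshape(n, s)]

    def coordinate_wise_median(grads):
        # A standard robust aggregator, used here only as an example;
        # any existing defense (e.g., Krum, trimmed mean) could be plugged in.
        return np.median(np.stack(grads), axis=0)

    # Usage: resample first, then apply the unchanged robust aggregator.
    worker_grads = [np.random.default_rng(i).standard_normal(10) for i in range(20)]
    robust_update = coordinate_wise_median(resample_gradients(worker_grads, s=2))

The intuition suggested by the abstract is that each averaged gradient mixes data from several workers, so the aggregator's inputs look closer to iid even when the underlying datasets are heterogeneous, which is why defenses designed for identical data can be reused without modification.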

