LEGATO: A LayerwisE Gradient AggregaTiOn Algorithm for Mitigating Byzantine Attacks in Federated Learning

07/26/2021
by Kamala Varma, et al.

Federated learning has arisen as a mechanism that allows multiple participants to collaboratively train a model without sharing their data. In these settings, participants (workers) may not trust each other fully; for instance, a set of competitors may collaboratively train a machine learning model to detect fraud. The workers provide local gradients that a central server uses to update a global model. This global model can be corrupted when Byzantine workers send malicious gradients, which necessitates robust methods for aggregating gradients that mitigate the adverse effects of Byzantine inputs. Existing robust aggregation algorithms are often computationally expensive and effective only under strict assumptions. In this paper, we introduce LayerwisE Gradient AggregaTiOn (LEGATO), an aggregation algorithm that is, by contrast, scalable and generalizable. Informed by a study of layer-specific responses of gradients to Byzantine attacks, LEGATO employs a dynamic gradient reweighting scheme that is novel in its treatment of gradients based on layer-specific robustness. We show that LEGATO is more computationally efficient than multiple state-of-the-art techniques and more generally robust across a variety of attack settings in practice. We also demonstrate LEGATO's benefits for gradient descent convergence in the absence of an attack.
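The abstract describes aggregation that treats each layer separately and down-weights layers whose gradients look less robust. The following is a minimal illustrative sketch of that idea, not the paper's actual algorithm: it assumes per-layer robustness can be scored from the variance of recent gradient norms (the statistic, the `legato_aggregate` name, and the weighting formula are all assumptions for illustration).

```python
import numpy as np

def legato_aggregate(worker_grads, norm_history):
    """Hypothetical sketch of layerwise robust aggregation.

    worker_grads: list over workers; each entry is a list of
        per-layer np.ndarray gradients.
    norm_history: list over layers; each entry is a list of past
        gradient norms for that layer, used to gauge robustness.
    Returns a list of per-layer aggregated gradients.
    """
    n_layers = len(worker_grads[0])
    aggregated = []
    for layer in range(n_layers):
        # Score robustness as the inverse log-variance of recent
        # gradient norms (an assumption; the paper's exact
        # statistic may differ). High variance -> lower weight.
        history = norm_history[layer]
        var = np.var(history) if len(history) > 1 else 1.0
        robustness = 1.0 / (1.0 + np.log1p(var))

        # Average this layer's gradients across workers, then
        # scale the update by the layer's robustness score.
        layer_stack = np.stack([g[layer] for g in worker_grads])
        aggregated.append(robustness * layer_stack.mean(axis=0))
    return aggregated
```

In this sketch, a layer whose gradient norms have been stable keeps close to the plain cross-worker average, while a volatile layer's update is shrunk toward zero, limiting the influence a Byzantine worker can exert through that layer.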

Related research:

- An Experimental Study of Byzantine-Robust Aggregation Schemes in Federated Learning (02/14/2023)
- BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning (04/14/2021)
- An Equivalence Between Data Poisoning and Byzantine Gradient Attacks (02/17/2022)
- Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging (09/11/2019)
- Byzantine-Robust Learning on Heterogeneous Datasets via Resampling (06/16/2020)
- ByzShield: An Efficient and Robust System for Distributed Training (10/10/2020)
- Byzantine-Robust Decentralized Learning via Self-Centered Clipping (02/03/2022)
