BEV-SGD: Best Effort Voting SGD for Analog Aggregation Based Federated Learning against Byzantine Attackers

10/18/2021
by Xin Fan, et al.

As a promising distributed learning technology, analog aggregation based federated learning over the air (FLOA) provides high communication efficiency and privacy provisioning in the edge computing paradigm. When all edge devices (workers) upload their local updates to the parameter server (PS) simultaneously over commonly shared time-frequency resources, the PS can obtain only the averaged update rather than the individual local ones. This concurrent transmit-and-aggregate scheme reduces communication latency and cost, but it also leaves FLOA vulnerable to Byzantine attacks that degrade learning performance. Toward the design of Byzantine-resilient FLOA, this paper starts by analyzing the channel inversion (CI) power control mechanism widely used in the existing FLOA literature. Our theoretical analysis indicates that although CI achieves good learning performance in attack-free scenarios, it provides only limited defense against Byzantine attacks. We then propose a novel defense, a best effort voting (BEV) power control policy integrated with stochastic gradient descent (SGD). BEV-SGD improves the robustness of FLOA to Byzantine attacks by allowing all workers to send their local updates at their maximum transmit power. Under the strongest attack, we derive the expected convergence rates of FLOA with the CI and BEV power control policies, respectively. Comparing the rates reveals that BEV-SGD enjoys better convergence behavior than its CI counterpart, which we verify with experimental simulations.
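To make the over-the-air aggregation and the two power control policies concrete, here is a minimal toy sketch in Python. It assumes a real-valued flat-fading channel model and a quadratic toy loss; the function names, constants (P_MAX, NOISE_STD, BYZANTINE), and the sign-flip attack model are illustrative assumptions for this sketch, not the paper's actual implementation or attack.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 20            # number of workers
D = 10            # model dimension
P_MAX = 1.0       # per-worker transmit power budget (assumed)
NOISE_STD = 0.05  # receiver noise level (assumed)
BYZANTINE = {0, 1, 2}  # indices of attacking workers (hypothetical)

def local_updates(w):
    """Honest workers compute a noisy gradient of a toy quadratic loss;
    Byzantine workers send a scaled, sign-flipped update instead."""
    g = np.tile(2.0 * w, (K, 1)) + 0.1 * rng.standard_normal((K, D))
    for k in BYZANTINE:
        g[k] = -5.0 * g[k]
    return g

def ota_aggregate(g, h, policy):
    """Analog over-the-air sum: the PS observes sum_k b_k * h_k * g_k plus
    noise, never the individual g_k. Only the scaling b_k differs by policy."""
    if policy == "CI":
        # Channel inversion: pre-equalize so every worker lands with equal gain.
        b = np.sqrt(P_MAX) / h
    else:
        # BEV: everyone transmits at maximum power, so no worker can amplify
        # a weak-channel (or falsely reported) link beyond the power cap.
        b = np.full(K, np.sqrt(P_MAX))
    y = ((b * h)[:, None] * g).sum(axis=0)
    y += NOISE_STD * rng.standard_normal(D)
    return y / (b * h).sum()  # PS rescales by the total effective gain

w = rng.standard_normal(D)
for t in range(200):
    h = np.abs(rng.standard_normal(K)) + 0.1  # per-round fading magnitudes
    w -= 0.05 * ota_aggregate(local_updates(w), h, policy="BEV")
print("final loss:", float((w ** 2).sum()))
```

In this sketch, CI makes b_k * h_k the same constant for every worker, so the PS recovers a plain average and a worker inverting a claimed deep fade would need unbounded power; under BEV, each worker's influence is capped at sqrt(P_MAX) * h_k, which mirrors the robustness intuition stated in the abstract.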


Related research

08/11/2020
Holdout SGD: Byzantine Tolerant Federated Learning
This work presents a new distributed Byzantine tolerant federated learni...

07/16/2022
MixTailor: Mixed Gradient Aggregation for Robust Learning Against Tailored Attacks
Implementations of SGD on distributed and multi-GPU systems create new ...

05/23/2018
Phocas: dimensional Byzantine-resilient stochastic gradient descent
We propose a novel robust aggregation rule for distributed synchronous S...

04/14/2021
BROADCAST: Reducing Both Stochastic and Compression Noise to Robustify Communication-Efficient Federated Learning
Communication between workers and the master node to collect local stoch...

06/17/2020
Communication-Efficient Robust Federated Learning Over Heterogeneous Datasets
This work investigates fault-resilient federated learning when the data ...

03/02/2020
BASGD: Buffered Asynchronous SGD for Byzantine Learning
Distributed learning has become a hot research topic, due to its wide ap...

02/28/2020
Distributed Momentum for Byzantine-resilient Learning
Momentum is a variant of gradient descent that has been proposed for its...
