EIFFeL: Ensuring Integrity for Federated Learning

12/23/2021
by Amrita Roy Chowdhury, et al.

Federated learning (FL) enables clients to collaborate with a server to train a machine learning model. To ensure privacy, the server performs secure aggregation of updates from the clients. Unfortunately, this prevents verification of the well-formedness (integrity) of the updates, as the updates are masked. Consequently, malformed updates designed to poison the model can be injected without detection. In this paper, we formalize the problem of ensuring both update privacy and integrity in FL and present a new system, EIFFeL, that enables secure aggregation of verified updates. EIFFeL is a general framework that can enforce arbitrary integrity checks and remove malformed updates from the aggregate, without violating privacy. Our empirical evaluation demonstrates the practicality of EIFFeL. For instance, with 100 clients and 10% poisoning, EIFFeL can train an MNIST classification model to the same accuracy as that of a non-poisoned federated learner in just 2.4 seconds per iteration.
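As a rough illustration of the workflow the abstract describes (check each client's update for well-formedness, then aggregate only the verified updates so the server never sees individual updates in the clear), the sketch below combines pairwise-masking secure aggregation with a simple L2-norm bound as the integrity predicate. This is a toy, assumption-laden stand-in: the mask derivation, the NORM_BOUND predicate, and the plaintext check are hypothetical simplifications, and EIFFeL itself verifies updates without revealing them, which this sketch does not capture.

```python
# Toy sketch (not EIFFeL's actual protocol): pairwise-masking secure
# aggregation preceded by a simple integrity check on client updates.
# EIFFeL performs the check on hidden updates; here it is done in plaintext
# purely for illustration.
import numpy as np

NORM_BOUND = 10.0   # hypothetical well-formedness rule: ||update||_2 <= bound
DIM = 4             # toy model dimension
rng = np.random.default_rng(0)

def pairwise_masks(client_ids, dim, seed=42):
    """Derive cancelling pairwise masks: the mask client i adds for pair (i, j)
    is subtracted by client j, so all masks cancel in the sum."""
    masks = {cid: np.zeros(dim) for cid in client_ids}
    for i in client_ids:
        for j in client_ids:
            if i < j:
                pair_rng = np.random.default_rng(seed + 1000 * i + j)
                m = pair_rng.normal(size=dim)
                masks[i] += m
                masks[j] -= m
    return masks

def is_well_formed(update):
    """Example integrity predicate (stand-in for an arbitrary check)."""
    return np.linalg.norm(update) <= NORM_BOUND

# Client updates: two honest clients, one malformed (poisoned, huge norm).
updates = {0: rng.normal(size=DIM), 1: rng.normal(size=DIM), 2: 100.0 * np.ones(DIM)}

# Keep only clients whose updates pass the integrity check.
valid = [cid for cid, u in updates.items() if is_well_formed(u)]

masks = pairwise_masks(valid, DIM)
masked = {cid: updates[cid] + masks[cid] for cid in valid}

# The server only sees masked updates; their sum equals the sum of valid updates.
aggregate = sum(masked.values())
assert np.allclose(aggregate, sum(updates[cid] for cid in valid))
print("aggregate of verified updates:", aggregate)
```

Running the sketch drops the poisoned client (whose norm exceeds the bound), and the printed aggregate equals the sum of the two honest updates even though the server only handled masked values.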

Related research:

- FedPerm: Private and Robust Federated Learning by Parameter Permutation (08/16/2022)
- F2ED-Learning: Good Fences Make Good Neighbors (10/02/2020)
- Blockchain-based Federated Learning with SMPC Model Verification Against Poisoning Attack for Healthcare Systems (04/26/2023)
- Regulating Clients' Noise Adding in Federated Learning without Verification (02/24/2023)
- Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization (02/04/2022)
- The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning (03/27/2023)
- Subspace based Federated Unlearning (02/24/2023)
