Inverting Gradients – How easy is it to break privacy in federated learning?

03/31/2020
by Jonas Geiping, et al.

The idea of federated learning is to collaboratively train a neural network on a server. Each user receives the current weights of the network and in turn sends parameter updates (gradients) based on local data. This protocol has been designed not only to train neural networks data-efficiently, but also to provide privacy benefits for users, as their input data remains on device and only parameter gradients are shared. In this paper we show that sharing parameter gradients is by no means secure: by exploiting a cosine similarity loss along with optimization methods from adversarial attacks, we are able to faithfully reconstruct images at high resolution from the knowledge of their parameter gradients, and demonstrate that such a break of privacy is possible even for trained deep networks. Moreover, we analyze the effects of architecture as well as parameters on the difficulty of reconstructing the input image, prove that any input to a fully connected layer can be reconstructed analytically independently of the remaining architecture, and show numerically that even averaging gradients over several iterations or several images does not protect the user's privacy in federated learning applications in computer vision.
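The analytic reconstruction claim for fully connected layers can be illustrated with a few lines of numpy. This is a hypothetical sketch, not the paper's code: for a layer y = Wx + b with any downstream loss L, the chain rule gives dL/dW = (dL/dy)xᵀ and dL/db = dL/dy, so any row i of the weight gradient with a nonzero bias gradient reveals the private input exactly as x = (dL/dW)[i] / (dL/db)[i]. The toy loss below is an arbitrary choice for demonstration.

```python
import numpy as np

# Hypothetical demo of analytic input reconstruction from the gradients
# of a fully connected layer y = W x + b, as claimed in the abstract.
rng = np.random.default_rng(0)
x = rng.normal(size=4)        # the user's private input
W = rng.normal(size=(3, 4))   # layer weights
b = rng.normal(size=3)        # layer bias

y = W @ x + b
dL_dy = y - 1.0               # gradient of an example loss, 0.5*||y - 1||^2

# Gradients a user would share in federated learning:
# dL/dW_ij = dL/dy_i * x_j  and  dL/db_i = dL/dy_i
dL_dW = np.outer(dL_dy, x)
dL_db = dL_dy

# Attacker: pick any output unit with a nonzero bias gradient and divide.
i = int(np.argmax(np.abs(dL_db)))
x_reconstructed = dL_dW[i] / dL_db[i]

assert np.allclose(x_reconstructed, x)  # input recovered exactly
```

Note that the reconstruction uses only the shared gradients, not W, b, or the loss — which is why the result holds independently of the rest of the architecture.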


Related research

- Gradient Disaggregation: Breaking Privacy in Federated Learning by Reconstructing the User Participant Matrix (06/10/2021)
  We show that aggregated model updates in federated learning may be insec...

- See through Gradients: Image Batch Recovery via GradInversion (04/15/2021)
  Training deep neural networks requires gradient estimation from data bat...

- Attacks to Federated Learning: Responsive Web User Interface to Recover Training Data from User Gradients (06/08/2020)
  Local differential privacy (LDP) is an emerging privacy standard to prot...

- Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning (04/25/2022)
  The idea of federated learning is to train deep neural network models co...

- Responsive Web User Interface to Recover Training Data from User Gradients in Federated Learning (06/08/2020)
  Local differential privacy (LDP) is an emerging privacy standard to prot...

- Backdoor Attacks on Federated Meta-Learning (06/12/2020)
  Federated learning allows multiple users to collaboratively train a shar...

- R-GAP: Recursive Gradient Attack on Privacy (10/15/2020)
  Federated learning frameworks have been regarded as a promising approach...
