Towards General Deep Leakage in Federated Learning

10/18/2021
by Jiahui Geng, et al.

Unlike traditional centralized training, federated learning (FL) improves the global model by sharing and aggregating local models rather than local data, thereby protecting users' privacy. Although this training approach appears secure, research has demonstrated that an attacker can still recover private data from the shared gradient information. Such on-the-fly reconstruction attacks deserve in-depth study because they can occur at any stage of training, whether at the beginning or at the end; they require no auxiliary dataset and no additional models to be trained. We relax several unrealistic assumptions and limitations to apply this reconstruction attack to a broader range of scenarios. We propose methods that reconstruct training data from shared gradients or weights, corresponding to the FedSGD and FedAvg usage scenarios, respectively. We propose a zero-shot approach that restores labels even when the batch contains duplicate labels, and we study the relationship between label and image restoration. We find that image restoration fails if even one label in the batch is inferred incorrectly; we also find that when all images in a batch share the same label, the corresponding reconstruction is a fusion of that class's images. Our approaches are evaluated on classic image benchmarks, including CIFAR-10 and ImageNet. In supported batch size, image quality, and adaptability to the label distribution, our approach exceeds GradInversion, the state of the art.
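The core of why shared gradients leak both labels and inputs can be illustrated on the simplest case: a single sample passing through a fully-connected softmax classifier. The sketch below is not the paper's method (batches require optimization-based gradient matching); it is a minimal, self-contained demonstration under assumed toy dimensions, showing the well-known analytic leakage: the label is revealed by the sign of the bias gradient, and the input by a row of the weight gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully-connected classifier: logits = W @ x + b, softmax cross-entropy.
# Dimensions are illustrative, not from the paper.
n_feat, n_cls = 8, 5
W = rng.normal(size=(n_cls, n_feat))
b = rng.normal(size=n_cls)
x = rng.normal(size=n_feat)   # private input (unknown to the attacker)
y = 3                         # private label (unknown to the attacker)

# Client side: compute the gradient that would be shared under FedSGD.
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()                  # softmax probabilities
e = p.copy()
e[y] -= 1.0                   # dL/dlogits = softmax - one-hot(y)
grad_b = e                    # dL/db
grad_W = np.outer(e, x)       # dL/dW = (softmax - one-hot(y)) x^T

# Attacker side: the true class is the only one with a negative bias
# gradient, and x follows analytically from grad_W row / grad_b entry.
y_rec = int(np.argmin(grad_b))         # zero-shot label recovery
i = int(np.argmax(np.abs(grad_b)))     # any row with non-zero bias gradient
x_rec = grad_W[i] / grad_b[i]          # exact input recovery

print(y_rec, np.allclose(x_rec, x))   # → 3 True
```

For batches, these per-sample equations mix together, which is exactly why the paper's optimization-based matching and duplicate-label handling are needed.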


