Information Stealing in Federated Learning Systems Based on Generative Adversarial Networks

08/02/2021
by   Yuwei Sun, et al.

An attack on deep learning systems in which intelligent machines collaborate to solve problems can cause a node in the network to err on a critical judgment. At the same time, the security and privacy concerns around AI have drawn the attention of experts from multiple disciplines. In this research, we successfully mounted adversarial attacks on a federated learning (FL) environment using three different datasets. The attacks leveraged generative adversarial networks (GANs) to influence the learning process and to reconstruct users' private data by learning hidden features from shared local model parameters. The attack was target-oriented, drawing data with distinct class distributions from CIFAR-10, MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean distance between the real data and the reconstructed adversarial samples, we evaluated the adversary's performance during the learning process in various scenarios. Finally, we successfully reconstructed the victim's real data from the shared global model parameters with all the applied datasets.
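As a minimal sketch of the evaluation metric described in the abstract, the Euclidean (L2) distance between real images and GAN-reconstructed samples can be computed per sample over flattened pixel arrays. This is an illustrative NumPy sketch, not the paper's actual evaluation code; the array shapes and example data are assumptions.

```python
import numpy as np

def euclidean_distance(real, reconstructed):
    """Per-sample Euclidean (L2) distance between flattened image arrays.

    real, reconstructed: arrays of shape (n_samples, H, W) or (n_samples, H, W, C).
    Returns an array of n_samples distances; lower values mean the
    reconstruction is closer to the victim's real data.
    """
    real = np.asarray(real, dtype=np.float64).reshape(len(real), -1)
    reconstructed = np.asarray(reconstructed, dtype=np.float64).reshape(len(reconstructed), -1)
    return np.linalg.norm(real - reconstructed, axis=1)

# Hypothetical example: two 28x28 grayscale "images" (MNIST-sized)
real = np.zeros((2, 28, 28))
fake = np.ones((2, 28, 28))
print(euclidean_distance(real, fake))  # each distance = sqrt(784) = 28.0
```

In practice the distance would be tracked across FL communication rounds to show how the reconstructions converge toward the target class.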

research
10/13/2022

Federated Learning for Tabular Data: Exploring Potential Risk to Privacy

Federated Learning (FL) has emerged as a potentially powerful privacy-pr...
research
06/13/2023

Privacy Inference-Empowered Stealthy Backdoor Attack on Federated Learning under Non-IID Scenarios

Federated learning (FL) naturally faces the problem of data heterogeneit...
research
02/22/2023

Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning

Federated learning (FL) is recently surging as a promising decentralized...
research
07/25/2023

Mitigating Cross-client GANs-based Attack in Federated Learning

Machine learning makes multimedia data (e.g., images) more attractive, h...
research
10/19/2022

Backdoor Attack and Defense in Federated Generative Adversarial Network-based Medical Image Synthesis

Deep Learning-based image synthesis techniques have been applied in heal...
research
10/09/2019

Membership Model Inversion Attacks for Deep Networks

With the increasing adoption of AI, inherent security and privacy vulner...
research
03/22/2022

Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis

Model poisoning attacks on federated learning (FL) intrude in the entire...
