Quantification of the Leakage in Federated Learning

10/12/2019
by Zhaorui Li, et al.

With the growing emphasis on users' privacy, federated learning has become increasingly popular, and many architectures have been proposed to improve its security. Most of these architectures rest on the assumption that shared gradients cannot leak information about the training data. Recent work, however, has shown that gradients can reveal the data they were computed on. In this paper, we analyze this leakage for a federated approximated logistic regression model and show that the shared gradients can leak the complete training data when every element of the input is either 0 or 1.
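To make the mechanism concrete, the following is a minimal sketch, not the paper's approximated model: it assumes plain logistic regression with a bias term, and all function and variable names are illustrative. For one example, the weight gradient equals the residual times the input and the bias gradient equals the residual itself, so their element-wise ratio recovers the input exactly, and rounding snaps binary inputs back to {0, 1}.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_gradients(w, b, x, y):
    # Cross-entropy gradients for a single example:
    #   grad_w = (sigmoid(w.x + b) - y) * x,   grad_b = sigmoid(w.x + b) - y
    err = sigmoid(w @ x + b) - y
    return err * x, err

def reconstruct_binary_input(grad_w, grad_b, eps=1e-12):
    # grad_w = err * x and grad_b = err, so division recovers x whenever
    # err != 0; rounding removes floating-point noise for {0,1} inputs.
    if abs(grad_b) < eps:
        raise ValueError("zero residual: this gradient carries no signal")
    return np.rint(grad_w / grad_b).astype(int)

# Demo: the attacker sees only the gradients, never x itself.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x_true = rng.integers(0, 2, size=8)   # binary input, as in the paper's setting
grad_w, grad_b = per_example_gradients(w, b, x_true, y=1)
x_rec = reconstruct_binary_input(grad_w, grad_b)
assert np.array_equal(x_rec, x_true)
print("recovered input:", x_rec)

Note that this relies on the bias gradient exposing the residual; the paper's approximated model differs in its exact form, but the same proportionality between gradient and input drives the leakage.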

Related research

11/22/2019 - Parallel Distributed Logistic Regression for Vertical Federated Learning without Third-Party Coordinator
Federated Learning is a new distributed learning mechanism which allows ...

11/18/2020 - Privacy Leakage of Real-World Vertical Federated Learning
Federated learning enables mutually distrusting participants to collabor...

02/24/2021 - A Quantitative Metric for Privacy Leakage in Federated Learning
In the federated learning system, parameter gradients are shared among p...

11/19/2021 - Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification
Federated learning of deep learning models for supervised tasks, e.g. im...

11/08/2021 - Bayesian Framework for Gradient Leakage
Federated learning is an established method for training machine learnin...

02/21/2023 - Speech Privacy Leakage from Shared Gradients in Distributed Learning
Distributed machine learning paradigms, such as federated learning, have...

04/25/2022 - Analysing the Influence of Attack Configurations on the Reconstruction of Medical Images in Federated Learning
The idea of federated learning is to train deep neural network models co...
