PRECODE - A Generic Model Extension to Prevent Deep Gradient Leakage

08/10/2021
by Daniel Scheliga, et al.

Collaborative training of neural networks leverages distributed data by exchanging gradient information between different clients. Although training data entirely resides with the clients, recent work shows that training data can be reconstructed from such exchanged gradient information. To enhance privacy, gradient perturbation techniques have been proposed. However, they come at the cost of reduced model performance, increased convergence time, or increased data demand. In this paper, we introduce PRECODE, a PRivacy EnhanCing mODulE that can be used as a generic extension for arbitrary model architectures. We propose a simple yet effective realization of PRECODE using variational modeling. The stochastic sampling induced by variational modeling effectively prevents privacy leakage from gradients and in turn preserves the privacy of data owners. We evaluate PRECODE using state-of-the-art gradient inversion attacks on two different model architectures trained on three datasets. In contrast to commonly used defense mechanisms, we find that our proposed modification consistently reduces the attack success rate to 0% while having no negative impact on model training and final performance. As a result, PRECODE reveals a promising path towards privacy enhancing model extensions.
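To make the idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of the kind of variational bottleneck the abstract describes: an encoder maps intermediate features to the parameters of a latent Gaussian, a sample is drawn via the reparameterization trick, and a decoder maps the sample back to the feature space before the remaining layers. Layer names, sizes, and the placement between a backbone and its classifier are illustrative assumptions, not the paper's exact architecture, and the KL regularization term on the latent distribution is omitted for brevity.

```python
# Hypothetical sketch of a PRECODE-style variational bottleneck.
# Dimensions, names, and placement are assumptions for illustration.
import torch
import torch.nn as nn

class VariationalBottleneck(nn.Module):
    """Encodes features into a latent Gaussian, samples via the
    reparameterization trick, and decodes back to feature space.
    The stochastic sampling decouples gradients from the raw inputs."""
    def __init__(self, feature_dim: int, latent_dim: int):
        super().__init__()
        self.to_mu = nn.Linear(feature_dim, latent_dim)
        self.to_logvar = nn.Linear(feature_dim, latent_dim)
        self.decode = nn.Linear(latent_dim, feature_dim)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        mu = self.to_mu(features)
        logvar = self.to_logvar(features)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # stochastic sampling
        return self.decode(z)

class ModelWithBottleneck(nn.Module):
    """Hypothetical usage: insert the bottleneck between an arbitrary
    feature extractor and its output layer."""
    def __init__(self, backbone: nn.Module, feature_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.bottleneck = VariationalBottleneck(feature_dim, feature_dim // 2)
        self.classifier = nn.Linear(feature_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.bottleneck(self.backbone(x)))
```

Because the bottleneck is just another module, it can in principle be appended to existing architectures without changing the training loop, which is what makes it attractive as a generic model extension.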

Related Research

08/09/2022 - Combining Variational Modeling with Partial Gradient Perturbation to Prevent Deep Gradient Leakage
Exploiting gradient leakage to reconstruct supposedly private training d...

08/12/2022 - Dropout is NOT All You Need to Prevent Gradient Leakage
Gradient inversion attacks on federated learning systems reconstruct cli...

03/29/2022 - Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage
Federated Learning (FL) framework brings privacy benefits to distributed...

05/19/2021 - User Label Leakage from Gradients in Federated Learning
Federated learning enables multiple users to build a joint model by shar...

09/14/2020 - SAPAG: A Self-Adaptive Privacy Attack From Gradients
Distributed learning such as federated learning or collaborative learnin...

06/21/2019 - Deep Leakage from Gradients
Exchanging gradients is a widely used method in modern multi-node machin...

06/16/2023 - Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized Language Model Finetuning Using Shared Randomness
Language model training in distributed settings is limited by the commun...
