Per-Instance Privacy Accounting for Differentially Private Stochastic Gradient Descent

06/06/2022
by Da Yu, et al.

Differentially private stochastic gradient descent (DP-SGD) is the workhorse algorithm for recent advances in private deep learning. It provides a single privacy guarantee to all datapoints in the dataset. We propose an efficient algorithm to compute per-instance privacy guarantees for individual examples when running DP-SGD. We use our algorithm to investigate per-instance privacy losses across a number of datasets. We find that most examples enjoy stronger privacy guarantees than the worst-case bound. We further discover that the loss and the privacy loss on an example are well correlated. This implies that groups that are underserved in terms of model utility are simultaneously underserved in terms of privacy loss. For example, on CIFAR-10, the average ϵ of the class with the highest loss (Cat) is 32% higher than that of the class with the lowest loss (Ship). We also run membership inference attacks to show that this reflects disparate empirical privacy risks.
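
To make the accounting idea concrete, here is a minimal Python sketch under simplifying assumptions that are ours, not the paper's: full batches (so no privacy amplification by subsampling), a single fixed Renyi-DP order alpha, and the standard Gaussian-mechanism bound in which an example whose clipped gradient norm is s pays alpha * s^2 / (2 * sigma^2) of RDP per step. All function names and the toy data below are hypothetical stand-ins, not the authors' released code.

    # Sketch: per-instance RDP accounting for DP-SGD (simplified).
    import math
    import numpy as np

    def clip_norms(grad_norms, clip_bound):
        """Per-example gradient norms after clipping to clip_bound."""
        return np.minimum(grad_norms, clip_bound)

    def per_instance_rdp_step(clipped_norms, noise_multiplier, clip_bound, alpha):
        """RDP cost of one DP-SGD step for each example.

        sigma = noise_multiplier * clip_bound is the std of the Gaussian
        noise added to the summed gradient; an example whose clipped norm
        is s incurs RDP alpha * s^2 / (2 * sigma^2) at order alpha.
        """
        sigma = noise_multiplier * clip_bound
        return alpha * clipped_norms ** 2 / (2.0 * sigma ** 2)

    def rdp_to_eps(rdp, alpha, delta):
        """Standard RDP -> (eps, delta) conversion (Mironov, 2017)."""
        return rdp + math.log(1.0 / delta) / (alpha - 1.0)

    # Toy run: 1000 steps, 5 examples with different typical gradient norms.
    rng = np.random.default_rng(0)
    clip_bound, noise_multiplier, alpha, delta = 1.0, 2.0, 8.0, 1e-5
    typical_norms = np.array([0.1, 0.3, 0.5, 0.9, 1.5])  # last example always clips

    total_rdp = np.zeros_like(typical_norms)
    for _ in range(1000):
        # Stand-in for real per-example gradient norms from backprop.
        norms = typical_norms * rng.uniform(0.8, 1.2, size=typical_norms.shape)
        total_rdp += per_instance_rdp_step(
            clip_norms(norms, clip_bound), noise_multiplier, clip_bound, alpha)

    per_instance_eps = [rdp_to_eps(r, alpha, delta) for r in total_rdp]
    print(per_instance_eps)  # examples with smaller gradients end up with smaller eps

The paper's accountant is tighter than this sketch: it handles the subsampled Gaussian mechanism and optimizes over RDP orders before converting to (ϵ, δ). The sketch only illustrates the core mechanism by which examples with consistently small gradients accumulate smaller ϵ, while examples whose gradients always hit the clipping bound recover the usual worst-case guarantee.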


