On the Interaction Between Differential Privacy and Gradient Compression in Deep Learning

by Jimmy Lin, et al.

While differential privacy and gradient compression are separately well-researched topics in machine learning, the study of the interaction between the two is still relatively new. We perform a detailed empirical study of how the Gaussian mechanism for differential privacy and gradient compression jointly impact test accuracy in deep learning. The existing literature on gradient compression mostly evaluates compression in the absence of differential privacy guarantees and demonstrates that sufficiently high compression rates reduce accuracy. Similarly, the existing literature on differential privacy evaluates privacy mechanisms in the absence of compression and demonstrates that sufficiently strong privacy guarantees reduce accuracy. In this work, we observe that while gradient compression generally has a negative impact on test accuracy in non-private training, it can sometimes improve test accuracy in differentially private training. Specifically, we observe that when aggressive sparsification or rank reduction is applied to the gradients, test accuracy is less affected by the Gaussian noise added for differential privacy. These observations are explained through an analysis of how differential privacy and compression affect the bias and variance of the estimate of the average gradient. We follow this study with a recommendation on how to improve test accuracy in the context of differentially private deep learning with gradient compression. We evaluate this proposal and find that it can reduce the negative impact of noise added by differential privacy mechanisms on test accuracy by up to 24.6%, and reduce the negative impact of gradient sparsification on test accuracy by up to 15.1%.
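To make the setting concrete, the sketch below combines the two mechanisms the abstract discusses: per-example gradient clipping with Gaussian noise (the standard DP-SGD recipe) followed by top-k gradient sparsification. This is a minimal NumPy illustration, not the authors' exact pipeline; the function names, the choice of top-k as the compressor, and the ordering of noise before sparsification are assumptions for illustration.

```python
import numpy as np

def top_k_sparsify(grad, k):
    # Keep only the k largest-magnitude entries; zero out the rest.
    out = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]
    out[idx] = grad[idx]
    return out

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, k, rng):
    # Clip each per-example gradient to L2 norm <= clip_norm.
    clipped = [
        g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
        for g in per_example_grads
    ]
    # Average and add Gaussian noise calibrated to the clipping norm.
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noisy = avg + rng.normal(0.0, sigma, size=avg.shape)
    # Compress the noisy average gradient before applying the update.
    return top_k_sparsify(noisy, k)
```

Note that sparsifying after noise addition also discards noise in the zeroed coordinates, which is one intuition for why compression can hurt less, or even help, under differential privacy.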


