Concentrated Differentially Private Gradient Descent with Adaptive per-Iteration Privacy Budget

08/28/2018
by Jaewoo Lee, et al.

Iterative algorithms, like gradient descent, are common tools for solving a variety of problems, such as model fitting. For this reason, there is interest in creating differentially private versions of them. However, these conversions are often naive: a fixed number of iterations is chosen, the privacy budget is split evenly among them, and at each iteration the parameters are updated with a noisy gradient. In this paper, we show that gradient-based algorithms can be improved by a more careful allocation of privacy budget per iteration. Intuitively, at the beginning of the optimization the gradients are expected to be large, so they need not be measured very accurately; as the parameters approach their optimal values, the gradients shrink and must be measured more accurately. We add a basic line-search capability that helps the algorithm decide when more accurate gradient measurements are necessary. Our gradient descent algorithm works with the recently introduced zCDP version of differential privacy. It outperforms prior algorithms for model fitting and is competitive with the state of the art for (ϵ,δ)-differential privacy, a strictly weaker definition than zCDP.
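The baseline the abstract criticizes can be made concrete with a minimal NumPy sketch. This is not the paper's adaptive algorithm: it uses the naive uniform split of the total zCDP budget ρ across T iterations, and for simplicity it clips the whole averaged gradient and treats the clipping bound as the L2 sensitivity (a faithful implementation clips per-example gradients and accounts for sensitivity accordingly). Function names and parameters are illustrative assumptions.

```python
import numpy as np

def zcdp_sigma(rho_i, sensitivity):
    # Gaussian mechanism under zCDP: releasing g + N(0, sigma^2 I) with
    # sigma = sensitivity / sqrt(2 * rho_i) satisfies rho_i-zCDP.
    return sensitivity / np.sqrt(2.0 * rho_i)

def noisy_gradient_descent(X, y, rho_total, T, lr=0.1, clip=1.0, seed=0):
    """DP gradient descent for least squares with the naive uniform
    per-iteration budget rho_total / T (the scheme the paper improves on)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    rho_i = rho_total / T  # uniform split; the paper adapts this per iteration
    for _ in range(T):
        grad = X.T @ (X @ theta - y) / n
        # Simplification: clip the averaged gradient and use `clip` as the
        # sensitivity; a real implementation clips per-example contributions.
        norm = np.linalg.norm(grad)
        if norm > clip:
            grad = grad * (clip / norm)
        sigma = zcdp_sigma(rho_i, clip)
        theta -= lr * (grad + rng.normal(0.0, sigma, size=d))
    return theta
```

Early on, the true gradient dwarfs the noise of scale sigma, so coarse measurements suffice; near the optimum the gradient shrinks below the noise floor, which is exactly when the paper's line search spends more budget per measurement.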


Related research

- 11/14/2022: SA-DPSGD: Differentially Private Stochastic Gradient Descent based on Simulated Annealing. "Differential privacy (DP) provides a formal privacy guarantee that preve..."
- 02/11/2021: Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent. "We model the dynamics of privacy loss in Langevin diffusion and extend i..."
- 01/18/2021: On the Differentially Private Nature of Perturbed Gradient Descent. "We consider the problem of empirical risk minimization given a database,..."
- 07/19/2023: The importance of feature preprocessing for differentially private linear optimization. "Training machine learning models with differential privacy (DP) has rece..."
- 02/25/2021: Do Not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning. "The privacy leakage of the model about the training data can be bounded ..."
- 06/24/2019: The Value of Collaboration in Convex Machine Learning with Differential Privacy. "In this paper, we apply machine learning to distributed private data own..."
- 01/19/2021: On Dynamic Noise Influence in Differentially Private Learning. "Protecting privacy in learning while maintaining the model performance h..."
