Differentially Private Image Classification from Features

11/24/2022
by Harsh Mehta, et al.

Leveraging transfer learning has recently been shown to be an effective strategy for training large models with Differential Privacy (DP). Moreover, somewhat surprisingly, recent works have found that privately training just the last layer of a pre-trained model provides the best utility with DP. While past studies largely rely on algorithms like DP-SGD for training large models, in the specific case of privately learning from features, we observe that the computational burden is low enough to allow for more sophisticated optimization schemes, including second-order methods. To that end, we systematically explore the effect of design parameters such as the loss function and optimization algorithm. We find that, while the commonly used logistic regression performs better than linear regression in the non-private setting, the situation is reversed in the private setting. We find that linear regression is much more effective than logistic regression from both a privacy and a computational standpoint, especially at stricter epsilon values (ϵ < 1). On the optimization side, we also explore using Newton's method, and find that second-order information is quite helpful even with privacy, although the benefit significantly diminishes with stricter privacy guarantees. While both methods use second-order information, least squares is effective at lower epsilons while Newton's method is effective at larger epsilon values. To combine the benefits of both, we propose a novel algorithm called DP-FC, which leverages feature covariance instead of the Hessian of the logistic regression loss and performs well across all ϵ values we tried. With this, we obtain new SOTA results on ImageNet-1k, CIFAR-100 and CIFAR-10 across all values of ϵ typically considered. Most remarkably, on ImageNet-1K, we obtain top-1 accuracy of 88% under (8, 8 * 10^-7)-DP and 84.3% under (0.1, 8 * 10^-7)-DP.
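Since the setting is linear learning on frozen, pre-extracted features, the flavor of the approach can be illustrated with a short sketch of differentially private least squares via noisy sufficient statistics (the feature covariance X^T X and cross-correlation X^T Y). This is an assumption-laden illustration, not the paper's DP-FC algorithm: the function name `dp_least_squares`, the clipping bound, the regularization constant, and the noise scaling are placeholders, and calibrating `noise_multiplier` to a target (ϵ, δ) guarantee is omitted.

```python
# Minimal sketch (assumptions, not the paper's exact DP-FC method) of DP linear
# regression from pre-extracted features, via Gaussian noise added to the
# sufficient statistics X^T X and X^T Y.
import numpy as np

def dp_least_squares(features, labels_onehot, noise_multiplier,
                     clip_norm=1.0, reg=1e-2, rng=None):
    """Privately solve min_W ||X W - Y||^2 + reg * ||W||^2 by noising X^T X and X^T Y.

    features:         (n, d) array of features from a frozen backbone.
    labels_onehot:    (n, k) one-hot label matrix.
    noise_multiplier: Gaussian noise scale; mapping it to an (epsilon, delta)
                      guarantee is not shown here.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Clip each feature vector so a single example has bounded influence
    # (sensitivity) on the sufficient statistics.
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    X = features * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    Y = labels_onehot  # one-hot rows already have unit norm

    d, k = X.shape[1], Y.shape[1]

    # A single example contributes at most clip_norm^2 (in Frobenius norm) to
    # X^T X and at most clip_norm to X^T Y, so noise is scaled accordingly.
    cov = X.T @ X
    cov += noise_multiplier * clip_norm**2 * rng.standard_normal((d, d))
    cov = (cov + cov.T) / 2.0  # keep the noisy covariance symmetric

    xty = X.T @ Y
    xty += noise_multiplier * clip_norm * rng.standard_normal((d, k))

    # Ridge term also keeps the noisy covariance well conditioned.
    W = np.linalg.solve(cov + reg * np.eye(d), xty)
    return W
```

The design choice this sketch highlights is the one the abstract emphasizes: when only a linear head is trained, privacy noise can be added once to data-independent-size statistics (d × d and d × k matrices) rather than to per-step gradients as in DP-SGD, which is what makes second-order-style solves cheap in this regime.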


