Differentially Private Image Classification by Learning Priors from Random Processes

06/08/2023
by Xinyu Tang, et al.

In privacy-preserving machine learning, differentially private stochastic gradient descent (DP-SGD) performs worse than SGD because of per-sample gradient clipping and noise addition. A recent focus in private learning research is improving the performance of DP-SGD on private data by incorporating priors learned on real-world public data. In this work, we explore how to improve the privacy-utility tradeoff of DP-SGD by learning priors from images generated by random processes and transferring these priors to private data. We propose DP-RandP, a three-phase approach. We attain new state-of-the-art accuracy when training from scratch on CIFAR10, CIFAR100, and MedMNIST for a range of privacy budgets ε ∈ [1, 8]. In particular, we improve the previous best reported accuracy on CIFAR10 from 60.6% to 72.3% for ε = 1. Our code is available at https://github.com/inspire-group/DP-RandP.
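The clipping-and-noise mechanism the abstract refers to is the core of the DP-SGD update: each example's gradient is clipped to a fixed norm, the clipped gradients are averaged, and calibrated Gaussian noise is added before the parameter step. The sketch below is illustrative only, not the DP-RandP implementation; the function name and parameters (`clip_norm`, `noise_multiplier`) are assumptions chosen to mirror common DP-SGD conventions.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.0, rng=None):
    """One illustrative DP-SGD update (hypothetical helper, not the paper's code).

    Each per-sample gradient is clipped to L2 norm <= clip_norm, the clipped
    gradients are averaged, and Gaussian noise with standard deviation
    noise_multiplier * clip_norm / batch_size is added to the average.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(per_sample_grads)
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose norm exceeds clip_norm; leave others as-is.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise calibrated to the clipping bound and batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return params - lr * (avg + noise)
```

With `noise_multiplier=0` the update reduces to plain SGD on clipped gradients, which makes the accuracy gap between SGD and DP-SGD described above concrete: both clipping bias and injected noise degrade the update.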


Related research

04/28/2022 · Unlocking High-Accuracy Differentially Private Image Classification through Scale
Differential Privacy (DP) provides a formal privacy guarantee preventing...

06/28/2019 · DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM
Machine learning (ML) models trained by differentially private stochasti...

07/25/2023 · Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering
Differential privacy is a widely accepted measure of privacy in the cont...

12/15/2021 · One size does not fit all: Investigating strategies for differentially-private learning across NLP tasks
Preserving privacy in training modern NLP models comes at a cost. We kno...

06/13/2023 · Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training
The surge in multimodal AI's success has sparked concerns over data priv...

10/18/2020 · Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization
A common pain point in differentially private machine learning is the si...

10/12/2021 · Not all noise is accounted equally: How differentially private learning benefits from large sampling rates
Learning often involves sensitive data and as such, privacy preserving e...
