On the utility and protection of optimization with differential privacy and classic regularization techniques

09/07/2022
by Eugenio Lomurno, et al.

Nowadays, owners and developers of deep learning models must comply with stringent privacy-preservation rules for their training data, which is often crowd-sourced and may contain sensitive information. The most widely adopted method to enforce privacy guarantees in deep learning relies on optimization techniques that provide differential privacy. According to the literature, this approach has proven a successful defence against several privacy attacks on models, but its downside is a substantial degradation of model performance. In this work, we compare the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices combined with regularization techniques. We analyze the utility of the resulting models, their training performance, and the effectiveness of membership inference and model inversion attacks against them. Finally, we discuss the flaws and limits of differential privacy and empirically demonstrate the often superior privacy-preserving properties of dropout and L2 regularization.
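To make the comparison concrete, the sketch below contrasts the two update rules the abstract discusses: a DP-SGD step (per-example gradient clipping plus Gaussian noise, as in Abadi et al.'s algorithm) and a standard SGD step with L2 regularization (weight decay). This is a minimal illustration, not the paper's actual training code; the function names, default hyperparameters, and the use of NumPy are assumptions for the sake of the example.

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient to `clip_norm`,
    average the clipped gradients, and add calibrated Gaussian noise.
    (Hyperparameter defaults here are illustrative, not from the paper.)"""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std scales with the clipping bound and shrinks with batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=w.shape)
    return w - lr * (mean_grad + noise)

def sgd_l2_step(w, grad, lr=0.1, weight_decay=1e-3):
    """Standard SGD update with L2 regularization (weight decay):
    the penalty term weight_decay * w is simply added to the gradient."""
    return w - lr * (grad + weight_decay * w)
```

Note the structural difference: DP-SGD needs access to per-example gradients (which is what makes it expensive in practice), while L2 regularization only modifies the already-averaged gradient.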

