When Relaxations Go Bad: "Differentially-Private" Machine Learning

02/24/2019
by Bargav Jayaraman, et al.

Differential privacy is becoming a standard notion for performing privacy-preserving machine learning over sensitive data. It provides formal guarantees, in terms of the privacy budget, ϵ, on how much information about individual training records is leaked by the model. While the privacy budget is directly correlated with the privacy leakage, the calibration of the privacy budget is not well understood. As a result, many existing works on privacy-preserving machine learning select large values of ϵ in order to obtain acceptable model utility, with little understanding of the concrete impact of such choices on meaningful privacy. Moreover, in scenarios where iterative learning procedures require privacy guarantees for each iteration, relaxed definitions of differential privacy are often used, further trading off privacy for better utility. In this paper, we evaluate the impact of these choices on privacy in experiments with logistic regression and neural network models. We quantify the privacy leakage in terms of the advantage of an adversary performing inference attacks and by analyzing the number of members at risk of exposure. Our main findings are that current mechanisms for differential privacy for machine learning rarely offer acceptable utility-privacy tradeoffs: settings that provide limited accuracy loss provide little effective privacy, and settings that provide strong privacy result in useless models.
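To make the quantities in the abstract concrete, here is a minimal sketch of the standard ϵ-differential privacy guarantee and a membership inference advantage metric. The specific advantage formulation and the \(e^{\epsilon} - 1\) bound follow Yeom et al.'s definition; treating it as the exact metric used in this paper is an assumption.

A randomized mechanism \(\mathcal{M}\) satisfies \(\epsilon\)-differential privacy if, for all neighboring datasets \(D\) and \(D'\) differing in a single record and all output sets \(S\),
\[ \Pr[\mathcal{M}(D) \in S] \le e^{\epsilon} \cdot \Pr[\mathcal{M}(D') \in S]. \]
The membership advantage of an inference adversary \(\mathcal{A}\) is the gap between its true-positive and false-positive rates,
\[ \mathrm{Adv}(\mathcal{A}) = \Pr[\mathcal{A} = 1 \mid \text{member}] - \Pr[\mathcal{A} = 1 \mid \text{non-member}], \]
and for an \(\epsilon\)-differentially private training mechanism this advantage is bounded by \(e^{\epsilon} - 1\). The bound is meaningful for small ϵ but becomes effectively vacuous for the large ϵ values commonly chosen to preserve model utility, which is the gap the paper probes empirically.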

Related research

02/14/2020 · LinkedIn's Audience Engagements API: A Privacy Preserving Data Analytics System at Scale
We present a privacy system that leverages differential privacy to prote...

08/27/2020 · Every Query Counts: Analyzing the Privacy Loss of Exploratory Data Analyses
An exploratory data analysis is an essential step for every data analyst...

08/08/2019 · That which we call private
A casual reader of the study by Jayaraman and Evans in USENIX Security 2...

03/04/2021 · Quantifying identifiability to choose and audit ε in differentially private deep learning
Differential privacy allows bounding the influence that training data re...

02/15/2023 · Tight Auditing of Differentially Private Machine Learning
Auditing mechanisms for differential privacy use probabilistic means to ...

02/26/2023 · P4L: Privacy Preserving Peer-to-Peer Learning for Infrastructureless Setups
Distributed (or Federated) learning enables users to train machine learn...

03/02/2020 · Differential Privacy at Risk: Bridging Randomness and Privacy Budget
The calibration of noise for a privacy-preserving mechanism depends on t...
