In Differential Privacy, There is Truth: On Vote Leakage in Ensemble Private Learning

by Jiaqi Wang et al.

When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns. The canonical Private Aggregation of Teacher Ensembles, or PATE, computes output labels by aggregating the predictions of a (possibly distributed) collection of teacher models via a voting mechanism. The mechanism adds noise to attain a differential privacy guarantee with respect to the teachers' training data. In this work, we observe that this use of noise, which makes PATE predictions stochastic, enables new forms of leakage of sensitive information. For a given input, our adversary exploits this stochasticity to extract high-fidelity histograms of the votes submitted by the underlying teachers. From these histograms, the adversary can learn sensitive attributes of the input such as race, gender, or age. Although this attack does not directly violate the differential privacy guarantee, it clearly violates privacy norms and expectations, and would not be possible at all without the noise inserted to obtain differential privacy. In fact, counter-intuitively, the attack becomes easier as we add more noise to provide stronger differential privacy. We hope this encourages future work to consider privacy holistically rather than treat differential privacy as a panacea.
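The core observation can be illustrated with a small simulation. The sketch below (a hypothetical setup, not the authors' code: 10 teachers, 3 classes, Laplace noise, and illustrative parameter values) shows how an adversary who repeats the same query can recover the ordering of the hidden teacher votes purely from the frequency with which each label wins the noisy argmax:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_argmax(votes, scale, rng):
    # PATE-style noisy-max: add Laplace noise to each class's vote count
    # and release only the winning label.
    return int(np.argmax(votes + rng.laplace(0.0, scale, size=len(votes))))

def estimate_histogram(votes, scale, n_queries, rng):
    # Adversary: repeat the identical query and tally how often each
    # label wins. The win frequencies increase with the hidden vote
    # counts, so their ordering mirrors the teachers' vote histogram.
    counts = np.zeros(len(votes))
    for _ in range(n_queries):
        counts[noisy_argmax(votes, scale, rng)] += 1
    return counts / n_queries

votes = np.array([6, 3, 1])  # hypothetical vote histogram from 10 teachers
freq = estimate_histogram(votes, scale=5.0, n_queries=20_000, rng=rng)
```

Note the counter-intuitive effect described in the abstract: with no noise the released label is deterministic and reveals only the top class, whereas a larger noise scale makes every class win sometimes, so repeated queries leak the relative counts of the low-vote classes as well.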



