Pool Inference Attacks on Local Differential Privacy: Quantifying the Privacy Guarantees of Apple's Count Mean Sketch in Practice

04/14/2023
by Andrea Gadotti, et al.

Behavioral data generated by users' devices, ranging from emoji use to pages visited, are collected at scale to improve apps and services. These data, however, contain fine-grained records and can reveal sensitive information about individual users. Local differential privacy has been adopted by companies as a solution to collect data from users while preserving privacy. We first introduce pool inference attacks, where an adversary has access to a user's obfuscated data, defines pools of objects, and exploits the user's polarized behavior across multiple data collections to infer the user's preferred pool. Second, we instantiate this attack against Count Mean Sketch, a local differential privacy mechanism proposed by Apple and deployed on iOS and macOS devices, using a Bayesian model. Using Apple's parameters for the privacy loss ε, we then consider two specific attacks: one in the emoji setting – where an adversary aims to infer a user's preferred skin tone for emojis – and one against visited websites – where an adversary wants to learn the political orientation of a user from the news websites they visit. In both cases, we show the attack to be much more effective than a random guess when the adversary collects enough data. We find that users with high polarization and relevant interest are significantly more vulnerable, and we show that our attack is well calibrated, allowing the adversary to target such vulnerable users. We finally validate our results for the emoji setting using user data from Twitter. Taken together, our results show that pool inference attacks are a concern for data protected by local differential privacy mechanisms with a large ε, emphasizing the need for additional technical safeguards and for more research on how to apply local differential privacy across multiple collections.
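The two moving parts named in the abstract – Apple's Count Mean Sketch (CMS) randomizer and a Bayesian posterior over candidate pools – can be sketched compactly. Below is a minimal Python illustration: the client-side CMS report follows the description in Apple's "Learning with Privacy at Scale" whitepaper (random hash choice, one-hot ±1 encoding, independent bit flips with probability 1/(1+e^(ε/2))), while the SHA-256 hash family, the `log_likelihood` callback, and the uniform prior are illustrative assumptions, not the paper's exact model.

```python
import hashlib
import math
import random

def cms_client(value: str, epsilon: float, k: int, m: int) -> tuple[list[int], int]:
    """Client-side Count Mean Sketch report (per Apple's whitepaper).

    Encodes `value` as a one-hot {-1, +1} vector of length m under a
    randomly chosen hash function, then flips each entry independently
    with probability 1 / (1 + e^(epsilon/2)).
    """
    j = random.randrange(k)  # index of the randomly chosen hash function
    # Assumed hash family for illustration: SHA-256 of (index, value) mod m.
    h = int(hashlib.sha256(f"{j}|{value}".encode()).hexdigest(), 16) % m
    v = [-1] * m
    v[h] = 1
    p_flip = 1.0 / (1.0 + math.exp(epsilon / 2.0))  # per-bit flip probability
    noisy = [-b if random.random() < p_flip else b for b in v]
    return noisy, j

def pool_posterior(reports, pools, log_likelihood, prior=None):
    """Generic Bayesian pool inference: P(pool | reports) via Bayes' rule.

    `log_likelihood(report, pool)` is an assumed, attacker-supplied model
    of how likely one obfuscated report is under each pool; the paper's
    exact likelihood is not reproduced here.
    """
    n = len(pools)
    prior = prior or [1.0 / n] * n  # uniform prior unless specified
    log_post = [math.log(p) for p in prior]
    for r in reports:
        for i, pool in enumerate(pools):
            log_post[i] += log_likelihood(r, pool)
    # Normalize in log space for numerical stability.
    mx = max(log_post)
    weights = [math.exp(lp - mx) for lp in log_post]
    total = sum(weights)
    return [w / total for w in weights]
```

For intuition, with ε = 4 (one of Apple's deployed values) the per-bit flip probability is 1/(1 + e²) ≈ 0.12, so each individual report retains measurable signal; a pool inference adversary aggregates that residual signal across many reports rather than decoding any single one.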


