SNAP: Efficient Extraction of Private Properties with Poisoning

08/25/2022
by   Harsh Chaudhari, et al.

Property inference attacks allow an adversary to extract global properties of the training dataset from a machine learning model. Such attacks have privacy implications for data owners who share their datasets to train machine learning models. Several property inference attacks against deep neural networks have been proposed, but they all rely on the attacker training a large number of shadow models, which incurs a large computational overhead. In this paper, we consider the setting of property inference attacks in which the attacker can poison a subset of the training dataset and query the trained target model. Motivated by our theoretical analysis of model confidences under poisoning, we design an efficient property inference attack, SNAP, which obtains higher attack success and requires lower amounts of poisoning than the state-of-the-art poisoning-based property inference attack by Mahloujifar et al. For example, on the Census dataset, SNAP achieves 34% higher attack success than Mahloujifar et al. while being 56.5x faster. We also extend our attack to determine whether a property of interest is present in the training set at all, and to efficiently estimate its exact proportion. We evaluate our attack on several properties of varying proportions from four datasets, and demonstrate SNAP's generality and effectiveness.
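
At a high level, the attack works as follows: the adversary injects a small number of poisoned points that carry the target property but a fixed label, then queries the deployed model on fresh property-carrying points; how strongly the model's confidence has been skewed toward the poisoned label reveals how common the property was in the clean training data. The snippet below is a minimal sketch of that intuition, not the authors' implementation: the synthetic data, the scikit-learn logistic-regression target model, the 5% poisoning rate, and the two candidate property fractions (10% vs. 40%) are all illustrative assumptions.

```python
# Minimal sketch of a poisoning-based property inference attack (SNAP-style intuition).
# Everything here is an illustrative assumption: synthetic data, a logistic-regression
# target model, a 5% poison rate, and two candidate "worlds" (10% vs. 40% property).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_world(n, prop_frac):
    # Feature 0 is a binary "property" indicator; feature 1 noisily determines the label.
    X = rng.normal(size=(n, 5))
    X[:, 0] = (rng.random(n) < prop_frac).astype(float)
    y = (X[:, 1] + rng.normal(size=n) > 0).astype(int)
    return X, y

def train_with_poison(prop_frac, poison_rate=0.05, n=5000):
    X, y = make_world(n, prop_frac)
    # The adversary contributes points that HAVE the property, all labeled class 1,
    # to skew the model's confidence on property members toward class 1.
    n_poison = int(poison_rate * n)
    Xp = rng.normal(size=(n_poison, 5))
    Xp[:, 0] = 1.0
    yp = np.ones(n_poison, dtype=int)
    model = LogisticRegression(max_iter=1000)
    model.fit(np.vstack([X, Xp]), np.concatenate([y, yp]))
    return model

def attack_statistic(model, n_query=2000):
    # Mean class-1 confidence on fresh query points that carry the property.
    Xq = rng.normal(size=(n_query, 5))
    Xq[:, 0] = 1.0
    return model.predict_proba(Xq)[:, 1].mean()

# The more clean property points the training set contains, the more they dilute
# the poison, so the confidence statistic drops; comparing it against a threshold
# lets the attacker distinguish the two candidate worlds.
low = attack_statistic(train_with_poison(prop_frac=0.10))
high = attack_statistic(train_with_poison(prop_frac=0.40))
print(f"mean class-1 confidence on property queries: 10% world {low:.3f}, 40% world {high:.3f}")
```

The full attack additionally needs a principled way to choose the poison rate and the decision threshold, which is what the paper's theoretical analysis of model confidences under poisoning provides; the sketch above only shows why the confidence gap carries the signal.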

research
05/18/2022

Property Unlearning: A Defense Strategy Against Property Inference Attacks

During the training of machine learning models, they may store or "learn...

research
04/27/2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity

Machine learning models' goal is to make correct predictions for specifi...

research
12/16/2021

Dataset correlation inference attacks against machine learning models

Machine learning models are increasingly used by businesses and organiza...

research
06/07/2021

Formalizing Distribution Inference Risks

Property inference attacks reveal statistical properties about a trainin...

research
08/16/2021

NeuraCrypt is not private

NeuraCrypt (Yala et al. arXiv 2021) is an algorithm that converts a sens...
research
07/27/2022

DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking

The functionality of a deep learning (DL) model can be stolen via model ...

research
05/20/2022

SafeNet: Mitigating Data Poisoning Attacks on Private Machine Learning

Secure multiparty computation (MPC) has been proposed to allow multiple ...
