Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics

09/11/2020
by Jason W. Bentley et al.

We demonstrate how a target model's generalization gap leads directly to an effective deterministic black-box membership inference attack (MIA). This provides an upper bound, based on a simple metric, on how secure a model can be against MIA. Moreover, the attack is shown to be optimal in expectation when the adversary has access only to certain readily obtainable metrics of the network's training and performance. Experimentally, the attack is comparable in accuracy to state-of-the-art MIAs in many cases.
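As a rough illustration of the idea, the sketch below shows a classic correctness-based gap attack and the expected accuracy it achieves as a function of the target model's train/test accuracies. This is a minimal baseline in the spirit of the abstract, not the authors' exact attack; the function names (gap_attack, expected_attack_accuracy) and the toy numbers are illustrative assumptions.

```python
# Hypothetical sketch of a deterministic, metrics-only membership inference
# baseline in the spirit of the abstract; NOT the paper's exact method.
# Assumes black-box access to per-sample predicted labels, plus the target
# model's reported train and test accuracies.

import numpy as np

def gap_attack(predicted_labels, true_labels):
    """Predict 'member' (1) exactly when the target model classifies the
    queried sample correctly, 0 otherwise. The advantage of this rule over
    random guessing is governed by the generalization gap."""
    return (np.asarray(predicted_labels) == np.asarray(true_labels)).astype(int)

def expected_attack_accuracy(train_acc, test_acc, p_member=0.5):
    """Expected accuracy of the correctness rule when a queried sample is a
    training member with prior probability p_member. With train accuracy a_tr
    and test accuracy a_te: accuracy = p * a_tr + (1 - p) * (1 - a_te), so for
    a balanced prior the advantage scales with the gap (a_tr - a_te) / 2."""
    return p_member * train_acc + (1.0 - p_member) * (1.0 - test_acc)

if __name__ == "__main__":
    # Toy illustration with made-up numbers (not results from the paper).
    train_acc, test_acc = 0.99, 0.85
    print("expected MIA accuracy:", expected_attack_accuracy(train_acc, test_acc))
```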

Related research

06/28/2018  Towards Demystifying Membership Inference Attacks
Membership inference attacks seek to infer membership of individual trai...

07/07/2023  Scalable Membership Inference Attacks via Quantile Regression
Membership inference attacks are designed to determine, using black box ...

02/27/2020  Membership Inference Attacks and Defenses in Supervised Learning via Generalization Gap
This work studies membership inference (MI) attack against classifiers, ...

08/29/2019  White-box vs Black-box: Bayes Optimal Strategies for Membership Inference
Membership inference determines, given a sample and trained parameters o...

09/14/2023  SLMIA-SR: Speaker-Level Membership Inference Attacks against Speaker Recognition Systems
Membership inference attacks allow adversaries to determine whether a pa...

02/13/2018  Understanding Membership Inferences on Well-Generalized Learning Models
Membership Inference Attack (MIA) determines the presence of a record in...

10/15/2018  Memory Vulnerability: A Case for Delaying Error Reporting
To face future reliability challenges, it is necessary to quantify the r...
