Black-box Model Inversion Attribute Inference Attacks on Classification Models

by Shagufta Mehnaz et al.

The increasing use of machine learning (ML) technologies in privacy-sensitive domains such as medical diagnosis, lifestyle prediction, and business decision-making highlights the need to better understand whether these technologies leak sensitive and proprietary training data. In this paper, we focus on one kind of model inversion attack, in which the adversary knows the non-sensitive attributes of instances in the training data and aims to infer the value of a sensitive attribute unknown to them, using only oracle access to the target classification model. We devise two novel model inversion attribute inference attacks, a confidence modeling-based attack and a confidence score-based attack, and also extend our attacks to the case where some of the other (non-sensitive) attributes are unknown to the adversary. Furthermore, while previous work uses accuracy as the metric to evaluate the effectiveness of attribute inference attacks, we find that accuracy is uninformative when the sensitive attribute's distribution is unbalanced. We identify two metrics that are better suited for evaluating attribute inference attacks: G-mean and the Matthews correlation coefficient (MCC). We evaluate our attacks on two types of machine learning models, decision trees and deep neural networks, trained on two real datasets. Experimental results show that our newly proposed attacks significantly outperform the state-of-the-art attacks. Moreover, we empirically show that specific groups in the training dataset (grouped by attributes such as gender or race) can be more vulnerable to model inversion attacks. We also demonstrate that our attacks' performance is not significantly affected when some of the other (non-sensitive) attributes are also unknown to the adversary.
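To illustrate why the abstract's point about unbalanced sensitive attributes matters, here is a minimal sketch of the two metrics it recommends, G-mean (the geometric mean of true positive and true negative rates) and MCC, computed from a binary confusion matrix. The function name and structure are illustrative, not taken from the paper:

```python
import math

def gmean_mcc(y_true, y_pred):
    """Return (G-mean, MCC) for binary labels, 1 = sensitive value."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    # G-mean: sqrt(TPR * TNR); zero if either class is never predicted right
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    gmean = math.sqrt(tpr * tnr)

    # MCC: (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN));
    # conventionally 0 when the denominator vanishes
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return gmean, mcc

# An attack that always predicts the majority value scores 90% accuracy
# on a 9:1 distribution, yet both robust metrics correctly report 0.
print(gmean_mcc([0] * 9 + [1], [0] * 10))  # → (0.0, 0.0)
```

This example shows the failure mode the paper highlights: a trivial majority-class predictor looks strong under accuracy but scores zero under both G-mean and MCC.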


