Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack using Public Data

by Pratik Karmakar et al.

We study black-box model stealing attacks in which the attacker can query a machine learning model only through publicly available APIs. Specifically, we aim to design a black-box model extraction attack that uses a minimal number of queries to create an informative and distributionally equivalent replica of the target model. First, we define distributionally equivalent and max-information model extraction attacks. Then, we reduce both attacks to a variational optimisation problem. The attacker solves this problem to select the most informative queries, which simultaneously maximise the entropy and reduce the mismatch between the target and the stolen models. This leads to an active sampling-based query selection algorithm, Marich. We evaluate Marich on different text and image datasets and different models, including BERT and ResNet18. Marich extracts models that achieve 69-96% of the true model's accuracy while using only 1,070-6,950 samples from publicly available query datasets, which are different from the private training datasets. Models extracted by Marich yield prediction distributions that are ∼2-4× closer to the target's distribution than those of existing active sampling-based algorithms. The extracted models also achieve 85-95% accuracy under membership inference attacks. These experimental results validate that Marich is query-efficient and capable of performing task-accurate, high-fidelity, and informative model extraction.
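To make the query-selection idea concrete, here is a minimal, hypothetical sketch of an active sampling-based extraction loop of the kind the abstract describes: a surrogate model scores a public query pool by predictive entropy, sends the highest-scoring queries to a black-box target API, and retrains on the returned prediction distributions. This is not the authors' Marich implementation; the toy linear target, the entropy-only scoring rule, and all names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical black-box target: a fixed linear classifier whose weights
# the attacker never sees; only query_target() is available, as via an API.
W_target = rng.normal(size=(5, 3))
def query_target(X):
    return softmax(X @ W_target)  # API returns prediction probabilities

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Surrogate (the "stolen" model): a linear classifier fit to API outputs.
W_sur = np.zeros((5, 3))
def surrogate_predict(X):
    return softmax(X @ W_sur)

X_pool = rng.normal(size=(1000, 5))   # public, unlabeled query pool
X_lab, Y_lab = [], []

for rnd in range(5):
    # Score pool points by the surrogate's predictive entropy
    # (a simple stand-in for an informativeness criterion).
    scores = entropy(surrogate_predict(X_pool))
    idx = np.argsort(-scores)[:20]       # most informative queries
    answers = query_target(X_pool[idx])  # spend part of the query budget
    X_lab.append(X_pool[idx])
    Y_lab.append(answers)
    X_pool = np.delete(X_pool, idx, axis=0)

    # Refit the surrogate by gradient descent on cross-entropy
    # against the target's returned probability vectors.
    X_tr, Y_tr = np.vstack(X_lab), np.vstack(Y_lab)
    for _ in range(200):
        grad = X_tr.T @ (softmax(X_tr @ W_sur) - Y_tr) / len(X_tr)
        W_sur -= 0.5 * grad

# Fidelity proxy: label agreement on the remaining (never-queried) pool.
agreement = (surrogate_predict(X_pool).argmax(1)
             == query_target(X_pool).argmax(1)).mean()
print(f"label agreement on held-out pool: {agreement:.2f}")
```

Only 100 of the 1,000 pool points are ever sent to the target, mirroring the paper's emphasis on query efficiency; the agreement score plays the role of a fidelity measure between the target and the stolen model.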




Active Learning for Black-Box Adversarial Attacks in EEG-Based Brain-Computer Interfaces

Deep learning has made significant breakthroughs in many fields, includi...

Query-Efficient Black-Box Attack by Active Learning

Deep neural network (DNN) as a popular machine learning model is found t...

First to Possess His Statistics: Data-Free Model Extraction Attack on Tabular Data

Model extraction attacks are a kind of attacks where an adversary obtain...

Pareto-Secure Machine Learning (PSML): Fingerprinting and Securing Inference Serving Systems

With the emergence of large foundational models, model-serving systems a...

High-Fidelity Extraction of Neural Network Models

Model extraction allows an adversary to steal a copy of a remotely deplo...

Extracting Cloud-based Model with Prior Knowledge

Machine Learning-as-a-Service, a pay-as-you-go business pattern, is wide...

DualCF: Efficient Model Extraction Attack from Counterfactual Explanations

Cloud service providers have launched Machine-Learning-as-a-Service (MLa...
