PRADA: Protecting against DNN Model Stealing Attacks

05/07/2018
by Mika Juuti, et al.

As machine learning (ML) applications become increasingly prevalent, protecting the confidentiality of ML models becomes paramount for two reasons: (a) models may constitute a business advantage to their owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. One way to protect model confidentiality is to limit access to the model only via well-defined prediction APIs. This is common not only in machine-learning-as-a-service (MLaaS) settings where the model is remote, but also in scenarios like autonomous driving where the model is local but direct access to it is protected, for example, by hardware security mechanisms. Nevertheless, prediction APIs still leak information, so an adversary who repeatedly queries the model via the prediction API can mount a model extraction attack. In this paper, we describe a new model extraction attack that combines a novel approach for generating synthetic queries with recent advances in training deep neural networks. This attack outperforms state-of-the-art model extraction techniques in terms of transferability of targeted adversarial examples generated using the extracted model (+15-30 percentage points, pp) and in prediction accuracy (+15-20 pp) on two datasets. We then propose the first generic approach to effectively detect model extraction attacks: PRADA. It analyzes how the distribution of consecutive queries to the model evolves over time and raises an alarm when there are abrupt deviations. We show that PRADA can detect all known model extraction attacks with a 100% detection rate, and that it is also suited for detecting extraction attacks against local models.
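To make the detection idea concrete, the sketch below monitors a stream of prediction-API queries and raises an alarm when their distribution shifts abruptly. The class name QueryDistributionMonitor, the use of minimum L2 distances between successive queries, the Shapiro-Wilk normality test, and the threshold and min_history parameters are illustrative assumptions rather than details taken from the paper; this is a minimal sketch of a PRADA-style detector, not the authors' implementation.

```python
# Minimal sketch of a PRADA-style query-distribution monitor.
# Assumptions (not from the abstract): queries are flattened feature vectors,
# the "distribution of consecutive queries" is summarized by each query's
# minimum L2 distance to previously seen queries, and an "abrupt deviation"
# is flagged with a Shapiro-Wilk normality test on those distances.
import numpy as np
from scipy.stats import shapiro


class QueryDistributionMonitor:
    def __init__(self, threshold=0.95, min_history=20):
        self.threshold = threshold      # alarm when the normality statistic drops below this
        self.min_history = min_history  # wait for enough queries before testing
        self.queries = []               # past query vectors
        self.distances = []             # min distance of each query to earlier ones

    def observe(self, x):
        """Record one prediction-API query; return True if an alarm is raised."""
        x = np.asarray(x, dtype=float).ravel()
        if self.queries:
            dists = [np.linalg.norm(x - q) for q in self.queries]
            self.distances.append(min(dists))
        self.queries.append(x)

        if len(self.distances) < self.min_history:
            return False
        # Benign query streams tend to produce a smooth, roughly normal distance
        # distribution; streams of synthetic extraction queries distort it abruptly.
        stat, _ = shapiro(np.array(self.distances[-self.min_history:]))
        return stat < self.threshold
```

In a deployment one would presumably keep one such monitor per API client, since the distance distribution is only meaningful within a single client's query stream.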

