DAWN: Dynamic Adversarial Watermarking of Neural Networks

06/03/2019
by Sebastian Szyller, et al.

Training machine learning (ML) models is expensive: it requires substantial computational power, large amounts of labeled data, and human expertise. Thus, ML models constitute intellectual property (IP) and business value for their owners. Embedding digital watermarks during model training allows a model owner to later identify their models in case of theft or misuse. However, model functionality can also be stolen via model extraction, where an adversary trains a surrogate model using results returned from a prediction API of the original model. Recent work has shown that model extraction is a realistic threat. Existing watermarking schemes are ineffective against IP theft via model extraction, since it is the adversary who trains the surrogate model. In this paper, we introduce DAWN (Dynamic Adversarial Watermarking of Neural Networks), the first approach to use watermarking to deter IP theft via model extraction. Unlike prior watermarking schemes, DAWN does not impose changes to the training process. Instead, it operates at the prediction API of the protected model, dynamically changing the responses for a small subset of queries (e.g., <0.5%) from API clients. This subset constitutes a watermark that will be embedded in the surrogate model if a client uses its queries to train one. We show that DAWN is resilient against two state-of-the-art model extraction attacks: it effectively watermarks all extracted surrogate models, allowing model owners to reliably demonstrate ownership (with confidence greater than 1 - 2^-64) while incurring a negligible loss of prediction accuracy (0.03-0.5%).
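
The mechanism sketched in the abstract, selecting a small, client-specific subset of queries at the prediction API and answering them with altered labels, can be illustrated roughly as follows. This is a minimal sketch under assumptions, not the authors' implementation: the HMAC-based selection rule, the constants, and all function and parameter names are hypothetical.

# A minimal sketch of DAWN-style, API-side watermarking. Assumptions (not from
# the paper's code): the HMAC-based selection rule, the constants below, and
# all function and parameter names are illustrative.

import hashlib
import hmac

WATERMARK_RATE = 0.005   # fraction of queries answered with an altered label (<0.5%)
NUM_CLASSES = 10         # assumed number of output classes of the protected model


def _keyed_hash(client_key: bytes, query_bytes: bytes) -> int:
    """Deterministic keyed hash of a query: the same input from the same client
    always yields the same selection decision and the same watermark label."""
    digest = hmac.new(client_key, query_bytes, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big")


def answer_query(model_predict, client_key: bytes, query_bytes: bytes) -> int:
    """Return the model's prediction, or an altered label for watermarked queries.

    Watermarked (query, altered label) pairs would be recorded per client as a
    trigger set, to be replayed later against a suspected surrogate model."""
    h = _keyed_hash(client_key, query_bytes)
    true_label = model_predict(query_bytes)
    # Deterministically select a small, pseudo-random subset of this client's queries.
    if h % 10_000 < WATERMARK_RATE * 10_000:
        # Derive an incorrect label from the hash (never equal to the true label).
        offset = 1 + (h >> 16) % (NUM_CLASSES - 1)
        return (true_label + offset) % NUM_CLASSES
    return true_label


if __name__ == "__main__":
    # Toy usage: a dummy predictor that always returns class 3.
    dummy_predict = lambda q: 3
    key = b"per-client-secret-key"
    answers = [answer_query(dummy_predict, key, i.to_bytes(2, "big")) for i in range(2000)]
    flipped = sum(1 for a in answers if a != 3)
    print(f"{flipped} of {len(answers)} responses carry the watermark")

To later demonstrate ownership, the model owner would replay the recorded trigger inputs against a suspected surrogate and count how many of the altered labels it reproduces; the abstract's confidence bound (>1 - 2^-64) plausibly corresponds to bounding the chance that an unrelated model matches the trigger set by accident.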

Related research:

- Extraction of Complex DNN Models: Real Threat or Boogeyman? (10/11/2019)
- DynaMarks: Defending Against Deep Learning Model Extraction Using Dynamic Watermarking (07/27/2022)
- Adversarial Frontier Stitching for Remote Neural Network Watermarking (11/06/2017)
- PRADA: Protecting against DNN Model Stealing Attacks (05/07/2018)
- Careful What You Wish For: on the Extraction of Adversarially Trained Models (07/21/2022)
- Seeds Don't Lie: An Adaptive Watermarking Framework for Computer Vision Models (11/24/2022)
- Bad Citrus: Reducing Adversarial Costs with Model Distances (10/06/2022)
