Thieves on Sesame Street! Model Extraction of BERT-based APIs

10/27/2019
by Kalpesh Krishna, et al.

We study the problem of model extraction in natural language processing, in which an adversary with only query access to a victim model attempts to reconstruct a local copy of that model. Assuming that both the adversary and the victim fine-tune a large pretrained language model such as BERT (Devlin et al., 2019), we show that the adversary does not need any real training data to successfully mount the attack. In fact, the attacker need not even use grammatical or semantically meaningful queries: random sequences of words coupled with task-specific heuristics form effective extraction queries on a diverse set of NLP tasks, including natural language inference and question answering. Our work thus highlights an exploit made feasible only by the shift towards transfer learning within the NLP community: for a query budget of a few hundred dollars, an attacker can extract a model that performs only slightly worse than the victim. Finally, we study two defense strategies against model extraction: membership classification and API watermarking. While successful against naive adversaries, both are ineffective against more sophisticated ones.
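To make the attack concrete, below is a minimal sketch of the query-generation and labeling loop the abstract describes, assuming a sentiment-classification victim. The endpoint URL, vocabulary path, JSON schema, and the query_victim helper are hypothetical placeholders, and the paper's task-specific heuristics (e.g., sampling spans of a paragraph to form pseudo-questions for QA) are simplified here to uniform random word sampling.

```python
import random

import requests  # assumption: the victim model is exposed over HTTP

# Hypothetical endpoint and vocabulary file; substitute your own.
VICTIM_URL = "https://api.example.com/v1/sentiment"
VOCAB_PATH = "bert-base-uncased-vocab.txt"

def load_vocab(path):
    """One WordPiece token per line; skip special ([CLS], ...) and subword (##) tokens."""
    with open(path, encoding="utf-8") as f:
        return [t.strip() for t in f
                if t.strip() and not t.startswith("[") and not t.startswith("##")]

def random_query(vocab, min_len=5, max_len=15):
    """A nonsensical query: uniformly sampled words, no grammar required."""
    return " ".join(random.choices(vocab, k=random.randint(min_len, max_len)))

def query_victim(text):
    """Hypothetical API call returning the victim's predicted label."""
    resp = requests.post(VICTIM_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()["label"]

def build_extraction_set(vocab, budget):
    """Pair each random query with the victim's output label."""
    return [(q, query_victim(q)) for q in (random_query(vocab) for _ in range(budget))]

if __name__ == "__main__":
    vocab = load_vocab(VOCAB_PATH)
    extraction_set = build_extraction_set(vocab, budget=10_000)
```

The attacker would then fine-tune their own BERT checkpoint on the resulting (query, label) pairs; because both models start from the same pretrained weights, even these nonsensical queries are enough to make the extracted model's behavior track the victim's.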

Related research

01/13/2019
Passage Re-ranking with BERT
Recently, neural models pretrained on a language modeling task, such as ...

03/18/2021
Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!
Natural language processing (NLP) tasks, ranging from text classificatio...

08/30/2019
Adapt or Get Left Behind: Domain Adaptation through BERT Language Model Finetuning for Aspect-Target Sentiment Classification
Aspect-Target Sentiment Classification (ATSC) is a subtask of Aspect-Bas...

05/23/2021
Killing Two Birds with One Stone: Stealing Model and Inferring Attribute from BERT-based APIs
The advances in pre-trained models (e.g., BERT, XLNET and etc) have larg...

04/24/2020
Cross-lingual Information Retrieval with BERT
Multiple neural language models have been developed recently, e.g., BERT...

02/11/2019
BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
We show that BERT (Devlin et al., 2018) is a Markov random field languag...

10/14/2021
P-Adapters: Robustly Extracting Factual Information from Language Models with Diverse Prompts
Recent work (e.g. LAMA (Petroni et al., 2019)) has found that the qualit...
