Robustness to Programmable String Transformations via Augmented Abstract Training

02/22/2020
by Yuhao Zhang, et al.

Deep neural networks for natural language processing tasks are vulnerable to adversarial input perturbations. In this paper, we present a versatile language for programmatically specifying string transformations – e.g., insertions, deletions, substitutions, swaps, etc. – that are relevant to the task at hand. We then present an approach to adversarially training models that are robust to such user-defined string transformations. Our approach combines the advantages of search-based techniques for adversarial training with abstraction-based techniques. Specifically, we show how to decompose a set of user-defined string transformations into two component specifications, one that benefits from search and another from abstraction. We use our technique to train models on the AG and SST2 datasets and show that the resulting models are robust to combinations of user-defined transformations mimicking spelling mistakes and other meaning-preserving transformations.
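
To make the idea of programmable string transformations concrete, here is a minimal Python sketch, assuming a simple representation in which each user-defined transformation maps a token to a set of allowed replacements and perturbed sentences are enumerated by brute-force search. The function names, synonym table, and perturbation budget are illustrative assumptions and are not the paper's actual specification language.

```python
# Illustrative sketch only: a toy "specification" of string transformations
# (typo-like character swaps and meaning-preserving synonym substitutions)
# and a brute-force enumeration of the perturbation space they induce.
import itertools
from typing import Callable, Iterator, List

# A transformation maps a single token to its set of allowed replacements.
Transformation = Callable[[str], List[str]]

def swap_adjacent_chars(token: str) -> List[str]:
    """Typo-like perturbation: swap each pair of adjacent characters."""
    out = []
    for i in range(len(token) - 1):
        chars = list(token)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
        out.append("".join(chars))
    return out

def substitute_synonym(token: str) -> List[str]:
    """Meaning-preserving substitution from a tiny example synonym table."""
    synonyms = {"movie": ["film"], "great": ["excellent", "good"]}
    return synonyms.get(token, [])

def perturbations(sentence: List[str],
                  transformations: List[Transformation],
                  budget: int = 2) -> Iterator[List[str]]:
    """Enumerate sentences reachable by transforming at most `budget`
    token positions, trying every transformation at each chosen position."""
    positions = range(len(sentence))
    for k in range(budget + 1):
        for idxs in itertools.combinations(positions, k):
            choices = []
            for i in idxs:
                cands = [t for tr in transformations for t in tr(sentence[i])]
                if not cands:
                    break
                choices.append(cands)
            else:
                for picks in itertools.product(*choices):
                    perturbed = list(sentence)
                    for i, tok in zip(idxs, picks):
                        perturbed[i] = tok
                    yield perturbed

# Example: enumerate perturbed versions of a short review.
for variant in perturbations("a great movie".split(),
                             [swap_adjacent_chars, substitute_synonym]):
    print(" ".join(variant))
```

In the abstract's terms, the decomposition would roughly handle part of such a perturbation space by explicit enumeration as above, which suits search-based adversarial training, and the remainder by abstraction during training.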
