From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework

by Yangyi Chen, et al.

Textual adversarial attacks can expose a model's weaknesses by adding semantics-preserving but misleading perturbations to its inputs. The long-running adversarial attack-and-defense arms race in Natural Language Processing (NLP) has been algorithm-centric, yielding valuable techniques for automatic robustness evaluation. However, existing robustness evaluation practice suffers from incomplete evaluation coverage, impractical evaluation protocols, and invalid adversarial samples. In this paper, we set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to further exploit the advantages of adversarial attacks. To address the above challenges, we first determine robustness evaluation dimensions based on model capabilities and specify a reasonable attack algorithm to generate adversarial samples for each dimension. We then establish an evaluation protocol, including evaluation settings and metrics, under realistic demands. Finally, we use the perturbation degree of adversarial samples to control sample validity. We implement RobTest, a toolkit that realizes our automatic robustness evaluation framework. In our experiments, we conduct a robustness evaluation of RoBERTa models to demonstrate the effectiveness of our framework, and further show the rationality of each of its components. The code will be made public at <>.
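To make the pipeline concrete, below is a minimal sketch of the validity-controlled evaluation loop the abstract describes: generated adversarial samples are filtered by a perturbation-degree threshold, and accuracy is measured only on the samples that pass. All names here (AdversarialSample, perturbation_degree, robust_accuracy, max_degree) and the word-level degree measure are illustrative assumptions, not RobTest's actual API.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AdversarialSample:
    original: str   # clean input text
    perturbed: str  # adversarially perturbed text
    label: int      # gold label of the original input

def perturbation_degree(original: str, perturbed: str) -> float:
    # One simple proxy for perturbation degree: the fraction of word
    # positions that differ. The paper may define the measure differently.
    a, b = original.split(), perturbed.split()
    n = max(len(a), len(b))
    if n == 0:
        return 0.0
    diffs = sum(1 for i in range(n)
                if i >= len(a) or i >= len(b) or a[i] != b[i])
    return diffs / n

def robust_accuracy(model: Callable[[str], int],
                    samples: List[AdversarialSample],
                    max_degree: float = 0.15) -> float:
    # Keep only adversarial samples whose perturbation degree falls below
    # the validity threshold, then measure accuracy on the survivors.
    valid = [s for s in samples
             if perturbation_degree(s.original, s.perturbed) <= max_degree]
    if not valid:
        return 1.0  # no valid adversarial sample survived the filter
    correct = sum(1 for s in valid if model(s.perturbed) == s.label)
    return correct / len(valid)

# Example: a trivially wrong "model" that always predicts label 0.
samples = [AdversarialSample("a good movie", "a great movie", 1)]
print(robust_accuracy(lambda text: 0, samples, max_degree=0.5))  # -> 0.0

In the full framework, one such loop would presumably run per capability-based evaluation dimension, each paired with the attack algorithm chosen for that dimension, as the abstract outlines.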



