In and Out-of-Domain Text Adversarial Robustness via Label Smoothing

by Yahan Yang et al.
University of Pennsylvania

Recently it has been shown that state-of-the-art NLP models are vulnerable to adversarial attacks, where the predictions of a model can be drastically altered by slight modifications to the input (such as synonym substitutions). While several defense techniques have been proposed and adapted to the discrete nature of text adversarial attacks, the benefits of general-purpose regularization methods such as label smoothing for language models have not been studied. In this paper, we study the adversarial robustness provided by various label smoothing strategies in foundational models for diverse NLP tasks, in both in-domain and out-of-domain settings. Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT against various popular attacks. We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
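To make the regularizer concrete: standard label smoothing replaces a one-hot training target with a softened distribution, assigning 1 − ε to the true class and spreading ε over the remaining classes. The sketch below is a minimal illustration of this general technique (using the variant that divides ε among the non-true classes), not the specific strategies evaluated in the paper; the function names and ε = 0.1 default are illustrative.

```python
import numpy as np

def smooth_labels(labels, num_classes, epsilon=0.1):
    """Turn hard integer labels into smoothed target distributions.

    The true class receives 1 - epsilon; the remaining probability mass
    epsilon is shared uniformly among the other classes. (Another common
    variant mixes the one-hot target with a uniform distribution over
    all classes, including the true one.)
    """
    n = len(labels)
    targets = np.full((n, num_classes), epsilon / (num_classes - 1))
    targets[np.arange(n), labels] = 1.0 - epsilon
    return targets

def smoothed_cross_entropy(probs, smoothed_targets):
    # Mean cross-entropy between predicted class probabilities and the
    # smoothed targets; the small constant guards against log(0).
    return -np.mean(np.sum(smoothed_targets * np.log(probs + 1e-12), axis=1))
```

Because the smoothed target never places full mass on one class, the loss penalizes predictions that approach probability 1, which is one intuition for why smoothing curbs the over-confident errors discussed above.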



