Are Labels Required for Improving Adversarial Robustness?

05/31/2019
by Jonathan Uesato, et al.

Recent work has uncovered the interesting (and somewhat surprising) finding that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. This result is a key hurdle to deploying robust machine learning models in many real-world applications where labeled data is expensive. Our main insight is that unlabeled data can be a competitive alternative to labeled data for training adversarially robust models. Theoretically, we show that in a simple statistical setting, the sample complexity for learning an adversarially robust model from unlabeled data matches the fully supervised case up to constant factors. On standard datasets like CIFAR-10, a simple Unsupervised Adversarial Training (UAT) approach using unlabeled data improves robust accuracy by 21.7% over using 4K supervised examples alone, and captures over 95% of the improvement from the same number of labeled examples. Finally, we report an improvement of 4% over the previous state-of-the-art on CIFAR-10 against the strongest known attack by using additional unlabeled data from the uncurated 80 Million Tiny Images dataset. This demonstrates that our finding extends to the more realistic case where unlabeled data is also uncurated, thereby opening a new avenue for improving adversarial training.
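The core idea behind UAT-style training can be illustrated in miniature: train a standard classifier on the small labeled set, use it to pseudo-label a large unlabeled pool, then run adversarial training against those fixed pseudo-labels. The sketch below is a hypothetical toy instantiation (logistic regression on synthetic 2-D Gaussian data, with an FGSM-style perturbation standing in for a stronger attack); all function and variable names are illustrative, not from the paper's codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm(w, b, x, y, eps):
    # FGSM: one signed-gradient step that increases the logistic loss
    # log(1 + exp(-y * (x @ w + b))) within an L-infinity ball of radius eps.
    margin = y * (x @ w + b)
    grad_x = -y[:, None] * w * (1.0 - 1.0 / (1.0 + np.exp(-margin)))[:, None]
    return x + eps * np.sign(grad_x)

def train(x, y, eps=0.0, steps=200, lr=0.1):
    # Logistic regression (labels in {-1, +1}); eps > 0 trains on
    # adversarially perturbed inputs instead of clean ones.
    w, b = np.zeros(x.shape[1]), 0.0
    t = (y + 1) / 2                              # {0, 1} targets
    for _ in range(steps):
        xa = fgsm(w, b, x, y, eps) if eps > 0 else x
        p = 1.0 / (1.0 + np.exp(-(xa @ w + b)))  # P(y = +1)
        w -= lr * ((p - t)[:, None] * xa).mean(axis=0)
        b -= lr * (p - t).mean()
    return w, b

def make_data(n):
    # Two well-separated Gaussian classes in 2-D.
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * np.array([1.5, 1.5]) + rng.normal(size=(n, 2))
    return x, y

x_lab, y_lab = make_data(20)      # small labeled set
x_unl, _ = make_data(2000)        # large unlabeled pool

# Step 1: standard (non-adversarial) training on the labeled set.
w0, b0 = train(x_lab, y_lab, eps=0.0)
# Step 2: pseudo-label the unlabeled pool with that model.
y_pseudo = np.sign(x_unl @ w0 + b0)
# Step 3: adversarial training against the fixed pseudo-labels.
w, b = train(np.vstack([x_lab, x_unl]),
             np.concatenate([y_lab, y_pseudo]), eps=0.3)

# Evaluate robust accuracy under the same attack on held-out data.
x_te, y_te = make_data(1000)
x_adv = fgsm(w, b, x_te, y_te, eps=0.3)
robust_acc = float(np.mean(np.sign(x_adv @ w + b) == y_te))
```

Note that the pseudo-labels are noisy, but because they come from the same distribution as the labeled data, the extra examples still supply the signal adversarial training needs; this mirrors the paper's finding that label quality matters less than having enough data for robustness.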

