Can Adversarial Training Be Manipulated By Non-Robust Features?

01/31/2022
by Lue Tao, et al.

Adversarial training, originally designed to resist test-time adversarial examples, has shown promise in mitigating training-time availability attacks. This defense ability, however, is challenged in this paper. We identify a novel threat model named stability attacks, which aims to hinder the robust availability of adversarial training by slightly perturbing the training data. Under this threat, we find that adversarial training using a conventional defense budget ϵ provably fails to provide test robustness in a simple statistical setting when the non-robust features of the training data are reinforced by ϵ-bounded perturbations. Further, we analyze the necessity of enlarging the defense budget to counter stability attacks. Finally, comprehensive experiments demonstrate that stability attacks are harmful on benchmark datasets, so that an adaptive defense is necessary to maintain robustness.
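Below is a minimal sketch of how a stability attack and the adaptive defense could be instantiated under an ℓ∞ threat model. The helper names (craft_stability_attack, adversarial_train, pgd_perturb) and the loss-minimizing approximation of "reinforcing non-robust features" are illustrative assumptions for this sketch, not the authors' reference implementation.

```python
# Illustrative sketch only (assumed l_inf threat model and PGD machinery);
# not the paper's implementation.
import torch
import torch.nn.functional as F


def pgd_perturb(model, x, y, eps, steps=10, step_size=None):
    """Standard l_inf PGD, used here for adversarial training."""
    step_size = step_size or 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss, then project back into the eps ball.
        delta = (delta + step_size * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()


def craft_stability_attack(model, x, y, eps, steps=10):
    """Hypothetical stability attack: perturb training points within the eps ball
    so that label-correlated (non-robust) features are reinforced, approximated
    here by *descending* the loss on the true label."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Opposite direction of a PGD attack: make each point "easier" for its label.
        delta = (delta - 0.25 * eps * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()


def adversarial_train(model, loader, defense_eps, epochs=5, lr=0.1):
    """Adversarial training with budget defense_eps; the adaptive defense simply
    sets defense_eps larger than the attacker's eps."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_perturb(model, x, y, defense_eps)
            opt.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()
            opt.step()
    return model
```

In this sketch, adversarially training on the outputs of craft_stability_attack with a defense budget equal to the attacker's ϵ corresponds to the failure case the abstract describes, while passing a larger budget to adversarial_train corresponds to the adaptive defense of enlarging ϵ.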
