Scaling provable adversarial defenses

05/31/2018
by Eric Wong, et al.

Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbations; however, these methods are currently only feasible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three main directions. First, we present a technique for extending these training procedures to much more general networks, with skip connections (such as ResNets) and general nonlinearities; the approach is fully modular and can be implemented automatically (analogous to automatic differentiation). Second, in the specific case of ℓ_∞ adversarial perturbations and networks with ReLU nonlinearities, we adopt a nonlinear random projection for training, which scales linearly in the number of hidden units (previous approaches scaled quadratically). Third, we show how to further improve robust error through cascade models. On both the MNIST and CIFAR data sets, we train classifiers that improve substantially on the state of the art in provable robust adversarial error bounds: from 5.8% on MNIST (with ℓ_∞ perturbations of ϵ=0.1), and from 80% on CIFAR (with ℓ_∞ perturbations of ϵ=2/255). Code for all experiments in the paper is available at https://github.com/locuslab/convex_adversarial/.
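The linear-scaling random projection mentioned above can be understood through a classical fact about Cauchy random projections: if r has i.i.d. standard Cauchy entries, then (Wr)_i is Cauchy-distributed with scale ‖W_i‖_1, and the median of |Wr|_i over a few random draws estimates the row-wise ℓ_1 norms that appear in the robustness bound without ever materializing W. The sketch below illustrates this general idea in PyTorch; it is not the paper's exact training procedure or the released library's API, and the names (`estimate_row_l1_norms`, `apply_W`, `in_dim`, `k`) are illustrative.

```python
import torch

def estimate_row_l1_norms(apply_W, in_dim, k=50):
    """Estimate the row-wise l1 norms of a linear map W, given only as a
    black-box function x -> W x, via Cauchy random projections.

    If r has i.i.d. standard Cauchy entries, then (W r)_i ~ Cauchy(0, ||W_i||_1),
    and the median of |Cauchy(0, s)| equals s, so the elementwise median of
    |W r| over k random draws estimates every ||W_i||_1 at once.
    (Illustrative sketch; names are not the released code's API.)
    """
    cauchy = torch.distributions.Cauchy(0.0, 1.0)
    R = cauchy.sample((k, in_dim))                 # k random Cauchy vectors
    Z = torch.stack([apply_W(r) for r in R])       # shape (k, out_dim); each row is W r
    return Z.abs().median(dim=0).values            # per-row l1-norm estimates


# Quick sanity check against the exact norms of an explicit matrix:
W = torch.randn(300, 784)
approx = estimate_row_l1_norms(lambda x: W @ x, in_dim=784, k=1000)
exact = W.abs().sum(dim=1)                         # true row-wise l1 norms
print((approx - exact).abs().mean() / exact.mean())  # small relative error
```

Roughly speaking, the map from the input to each hidden unit plays the role of W here, so the ℓ_1 terms in the bound can be estimated with k projections pushed through the network rather than one computation per hidden unit, which is what removes the quadratic cost.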
