Distributionally Adversarial Attack
Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that classifiers adversarially trained with PGD are robust against a wide range of first-order attacks. However, it is worth noting that the objective of an attack/defense model is defined over a data distribution, typically in the form of risk maximization/minimization: E_{p(x)}[L(x)], with p(x) the data distribution and L(·) a loss function. Because PGD generates attack samples independently for each data point, the procedure does not necessarily lead to good generalization in terms of risk maximization. In this paper, we achieve this goal by proposing distributionally adversarial attack (DAA), a framework for solving for an optimal adversarial data distribution: a perturbed distribution that stays close to the original data distribution but maximally increases the generalization risk. Algorithmically, DAA performs optimization on the space of probability measures, which introduces direct dependencies among all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MadryLab. Notably, DAA outperforms all the attack algorithms listed in MadryLab's white-box leaderboard, reducing the accuracy of their secret MNIST model to 88.79% (with l_∞ perturbations of ϵ = 0.3) and the accuracy of their secret CIFAR model to 44.73% (with l_∞ perturbations of ϵ = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack
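For context, the following is a minimal sketch of the standard per-example PGD baseline that the abstract contrasts with: each input is perturbed independently by gradient-sign ascent on the loss, projected back onto an l_∞ ball of radius ϵ. It does not attempt to reproduce DAA's distribution-level optimization, which additionally couples the updates across data points; the function name, hyperparameter defaults, and PyTorch usage here are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.3, alpha=0.01, steps=40):
    """Per-example PGD under an l_inf ball of radius eps.

    Each example is perturbed independently (no coupling across the
    batch, unlike DAA's distribution-level update).
    """
    # Random start inside the eps-ball, clipped to valid pixel range.
    x_adv = x.clone().detach()
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then projection onto the l_inf ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)

    return x_adv.detach()
```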