Towards Compositional Adversarial Robustness: Generalizing Adversarial Training to Composite Semantic Perturbations
Model robustness against adversarial examples of a single perturbation type, such as the ℓ_p-norm, has been widely studied, yet its generalization to more realistic scenarios involving multiple semantic perturbations and their compositions remains largely unexplored. In this paper, we first propose a novel method for generating composite adversarial examples. By utilizing component-wise projected gradient descent and automatic attack-order scheduling, our method finds the optimal attack composition. We then propose generalized adversarial training (GAT) to extend model robustness from ℓ_p-norm perturbations to composite semantic perturbations, such as combinations of Hue, Saturation, Brightness, Contrast, and Rotation. Results on the ImageNet and CIFAR-10 datasets show that GAT is robust not only to any single attack but also to any combination of multiple attacks, and that it outperforms baseline ℓ_∞-norm bounded adversarial training approaches by a significant margin.
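To make the idea of component-wise projected gradient descent over semantic perturbation parameters concrete, here is a minimal sketch. The perturbation set (brightness and contrast only), the parameter bounds, the finite-difference gradient, and all function names are illustrative assumptions, not the paper's exact formulation; the paper additionally schedules the attack order automatically, which this sketch fixes by hand.

```python
import numpy as np

def apply_composite(img, brightness, contrast):
    # Apply brightness then contrast in a fixed order; the paper's
    # attack-order scheduling would choose this ordering automatically.
    out = img + brightness                              # additive brightness shift
    out = (out - out.mean()) * contrast + out.mean()    # contrast scaling around the mean
    return np.clip(out, 0.0, 1.0)

def component_pgd(img, loss_fn, bounds, steps=20, lr=0.05, eps=1e-3):
    """Component-wise PGD sketch: ascend loss_fn via finite-difference
    gradients on each semantic parameter, projecting each parameter back
    into its interval bound after every update."""
    params = {k: 0.5 * (lo + hi) for k, (lo, hi) in bounds.items()}
    for _ in range(steps):
        for k, (lo, hi) in bounds.items():  # update one component at a time
            perturbed = dict(params)
            perturbed[k] += eps
            g = (loss_fn(apply_composite(img, **perturbed))
                 - loss_fn(apply_composite(img, **params))) / eps
            params[k] = float(np.clip(params[k] + lr * g, lo, hi))  # projection step
    return params

# Usage: maximize a toy loss (distance from the clean image) over the
# composite perturbation parameters within hypothetical bounds.
img = np.random.rand(8, 8)
bounds = {"brightness": (-0.2, 0.2), "contrast": (0.8, 1.2)}
adv_params = component_pgd(img, lambda adv: float(((adv - img) ** 2).mean()), bounds)
```

In practice the loss would be the classifier's loss under attack (computed with autograd rather than finite differences), and the projection step is what keeps each semantic component within its allowed perturbation range.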