Beyond Adversarial Training: Min-Max Optimization in Adversarial Attack and Defense

06/09/2019
by   Tianyun Zhang, et al.

The worst-case training principle that minimizes the maximal adversarial loss, also known as adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing adversarial robustness against norm-ball bounded input perturbations. Nonetheless, min-max optimization beyond the purpose of AT has not been rigorously explored in research on adversarial attack and defense. In particular, given a set of risk sources (domains), minimizing the maximal loss induced by the domain set can be reformulated as a general min-max problem that differs from AT, since the maximization is taken over the probability simplex of the domain set. Examples of this general formulation include attacking model ensembles, devising a universal perturbation across input samples or data transformations, and performing generalized AT over multiple norm-ball threat models. We show that these problems can be solved under a unified and theoretically principled min-max optimization framework. Our proposed approach leads to substantial performance improvements over the uniform averaging strategy in four different tasks. Moreover, we show how the self-adjusted domain weights on the probability simplex, produced by our algorithms, can be used to explain the importance of different attack and defense models.
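To make the formulation concrete, the sketch below illustrates the generalized min-max problem described in the abstract: alternating gradient descent on the model variable and projected gradient ascent on the domain weights, which live on the probability simplex. The toy quadratic per-domain losses, variable names, and step sizes are illustrative assumptions, not the paper's experimental setup; in the actual tasks the per-domain losses would be adversarial losses over ensemble models, data transformations, or threat models.

```python
# Minimal NumPy sketch (assumed setup): minimize over theta the worst-case
# weighted sum of K per-domain losses, where the weights w lie on the simplex.
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

# Toy per-domain losses L_i(theta) = 0.5 * ||theta - c_i||^2 (assumed for illustration).
centers = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, 1.0]])
K = len(centers)

def domain_losses(theta):
    return 0.5 * np.sum((theta - centers) ** 2, axis=1)

def domain_grads(theta):
    return theta - centers  # gradient of each quadratic loss w.r.t. theta

theta = np.zeros(2)        # minimization variable (e.g., model or perturbation)
w = np.ones(K) / K         # maximization variable: weights over domains
alpha, beta = 0.1, 0.5     # descent / ascent step sizes (assumed)

for _ in range(200):
    losses = domain_losses(theta)
    # Ascent step on the simplex: upweight the currently hardest domains.
    w = project_simplex(w + beta * losses)
    # Descent step on theta against the weighted (worst-case) loss.
    theta = theta - alpha * (w @ domain_grads(theta))

print("weights over domains:", np.round(w, 3))
print("worst-case loss     :", np.max(domain_losses(theta)))
```

In this sketch the learned weights w concentrate on the hardest domains, which corresponds to the self-adjusted weighting factors that the abstract says can be used to interpret the importance of different attack and defense models.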

Related research

02/09/2021  Provable Defense Against Delusive Poisoning
Delusive poisoning is a special kind of attack to obstruct learning, whe...

10/29/2017  Certifiable Distributional Robustness with Principled Adversarial Training
Neural networks are vulnerable to adversarial examples and researchers h...

02/17/2023  Revisiting adversarial training for the worst-performing class
Despite progress in adversarial training (AT), there is a substantial ga...

10/22/2020  Defense-guided Transferable Adversarial Attacks
Though deep neural networks perform challenging tasks excellently, they ...

10/10/2018  Is PGD-Adversarial Training Necessary? Alternative Training via a Soft-Quantization Network with Noisy-Natural Samples Only
Recent work on adversarial attack and defense suggests that PGD is a uni...

01/29/2021  Adversarial Learning with Cost-Sensitive Classes
It is necessary to improve the performance of some special classes or to...

03/02/2022  Enhancing Adversarial Robustness for Deep Metric Learning
Owing to security implications of adversarial vulnerability, adversarial...
