The Interplay between Distribution Parameters and the Accuracy-Robustness Tradeoff in Classification

07/01/2021
by Alireza Mousavi Hosseini, et al.

Adversarial training tends to yield models that are less accurate on natural (unperturbed) examples than standard models. This can be attributed either to an algorithmic shortcoming or to a fundamental property of the training data distribution, which admits different solutions for the optimal standard and adversarial classifiers. In this work, we focus on the latter case under a binary Gaussian mixture classification problem. Unlike earlier work, we derive the natural accuracy gap between the optimal Bayes and adversarial classifiers, and study how the distributional parameters, namely the separation between the class centroids, the class proportions, and the covariance matrix, affect the derived gap. We show that under certain conditions, the natural error of the optimal adversarial classifier, as well as the gap, is locally minimized when the classes are balanced, in contrast to the Bayes classifier, whose natural accuracy is worst under perfect balance. Moreover, we show that with an ℓ_∞-bounded perturbation and an adversarial budget of ϵ, this gap is Θ(ϵ^2) for the worst-case parameters, which for suitably small ϵ indicates the theoretical possibility of achieving robust classifiers with near-perfect natural accuracy; this possibility is rarely reflected in practical algorithms.
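To make the setting concrete, below is a minimal numerical sketch, not the paper's derivation. It assumes balanced classes y = ±1 with x ~ N(y·μ, I), for which the natural accuracy of any linear rule sign(wᵀx) has the closed form Φ(wᵀμ/‖w‖₂), and it takes the robust classifier's weight vector to be the coordinatewise soft-thresholding of μ at the budget ϵ, a linear form known from related work on ℓ_∞-robust Gaussian mixture classification and used here purely for illustration. The printed ratio gap/ϵ² staying roughly constant is consistent with the Θ(ϵ^2) scaling stated in the abstract.

```python
# Illustrative sketch (assumptions as stated above, not the paper's exact solution):
# natural-accuracy gap between the Bayes classifier and a candidate
# l_inf-robust linear classifier for a balanced, isotropic binary Gaussian mixture.

import numpy as np
from scipy.stats import norm

def natural_accuracy(w, mu):
    """Closed-form natural accuracy of sign(w^T x) when x ~ N(y*mu, I):
    P(correct) = Phi(w^T mu / ||w||_2)."""
    return norm.cdf(w @ mu / np.linalg.norm(w))

def soft_threshold(mu, eps):
    """Coordinatewise shrinkage sign(mu_i) * max(|mu_i| - eps, 0)."""
    return np.sign(mu) * np.maximum(np.abs(mu) - eps, 0.0)

# Unequal coordinate magnitudes: soft-thresholding then tilts the decision
# direction away from mu, producing a nonzero natural-accuracy gap.
mu = np.array([1.0, 0.5, 0.25, 0.125])
bayes_acc = natural_accuracy(mu, mu)  # Bayes rule uses w = mu

for eps in [0.01, 0.02, 0.04, 0.08]:
    robust_acc = natural_accuracy(soft_threshold(mu, eps), mu)
    gap = bayes_acc - robust_acc
    # gap / eps^2 should stay roughly constant, consistent with Theta(eps^2)
    print(f"eps={eps:.2f}  gap={gap:.3e}  gap/eps^2={gap / eps**2:.3f}")
```

Note that if all coordinates of μ had equal magnitude, soft-thresholding would only rescale w without changing its direction, and the gap would vanish; the dependence on how the centroid separation is distributed across coordinates is one instance of the parameter effects the paper studies.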
