Evaluating Adversarial Robustness with Expected Viable Performance

09/18/2023
by   Ryan McCoppin, et al.

We introduce a metric for evaluating classifier robustness, with particular attention to adversarial perturbations, in terms of the classifier's expected functionality over possible perturbations. A classifier is considered non-functional (that is, has a functionality of zero) with respect to a perturbation bound if a conventional measure of performance, such as classification accuracy, falls below a minimally viable threshold when the classifier is tested on examples from within that bound. Defining robustness as an expected value is motivated by a domain-general approach to robustness quantification.
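The idea described above can be sketched in a few lines: mark the classifier as viable (functionality 1) or non-functional (functionality 0) at each perturbation bound, then take an expectation over the bounds. This is a minimal illustration, not the paper's implementation; the function name, the uniform weighting over bounds, and the example numbers are assumptions for demonstration.

```python
import numpy as np

def expected_viable_performance(accuracies, threshold, weights=None):
    """Expected functionality over a set of perturbation bounds.

    accuracies: measured accuracy at each perturbation bound
    threshold:  minimally viable accuracy; below it, functionality is 0
    weights:    probability weight of each bound (uniform if None)
    """
    accuracies = np.asarray(accuracies, dtype=float)
    # Viability indicator: 1 if performance is at least the threshold, else 0
    viable = (accuracies >= threshold).astype(float)
    if weights is None:
        weights = np.full_like(viable, 1.0 / len(viable))
    # Expected value of the viability indicator over perturbation bounds
    return float(np.sum(np.asarray(weights, dtype=float) * viable))

# Hypothetical example: accuracy decays as the perturbation bound grows
acc_at_bounds = [0.95, 0.85, 0.65, 0.40]  # bounds eps = 0.0, 0.1, 0.2, 0.3
print(expected_viable_performance(acc_at_bounds, threshold=0.7))  # → 0.5
```

Under uniform weighting, the metric is simply the fraction of perturbation bounds at which the classifier remains minimally viable; non-uniform weights would encode beliefs about which perturbation strengths are most likely.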
