Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for L0 Norm

by Wenjie Ruan, et al.

Deployment of deep neural networks (DNNs) in safety- or security-critical systems demands provable guarantees on their correct behaviour. One example is the robustness of image classification decisions, defined as the invariance of the classification over a small neighbourhood of images around a given input. Here we focus on the L_0 norm and study the problem of quantifying the global robustness of a trained DNN, where global robustness is defined as the expectation of the maximum safe radius over a test dataset. We first show that the problem is NP-hard, and then propose an approach that iteratively generates lower and upper bounds on the network's robustness. The approach is anytime, i.e., it returns intermediate bounds and robustness estimates that are gradually, but strictly, improved as the computation proceeds; tensor-based, i.e., the computation is conducted over a set of inputs simultaneously, rather than one by one, to enable efficient GPU computation; and has provable guarantees, i.e., both the bounds and the robustness estimates can converge to their optimal values. Finally, we demonstrate the utility of the approach in practice by applying and adapting the anytime algorithm to a set of challenging problems, including global robustness evaluation, guidance for the design of robust DNNs, competitive L_0 attacks, generation of saliency maps for model interpretability, and test generation for DNNs. We release the code for all case studies on GitHub.
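To make the quantities in the abstract concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of the two definitions involved: the L_0 maximum safe radius of a single input, i.e., the largest number r of coordinates such that no change to at most r coordinates alters the classification, and global robustness as the mean of those radii over a test set. A toy linear "classifier" and exhaustive search over tiny inputs stand in for a DNN and the paper's anytime bounding procedure, which exists precisely because this brute-force search is infeasible (the problem is NP-hard).

```python
import itertools

def classify(x):
    # Toy stand-in for a trained DNN: class 1 iff the feature sum is positive.
    return int(sum(x) > 0)

def max_safe_radius_l0(x, values=(0.0, 1.0)):
    """Exact L_0 maximum safe radius by exhaustive search (tiny inputs only):
    the largest r such that no perturbation of at most r coordinates,
    each set to any value in `values`, changes the classification."""
    n = len(x)
    base = classify(x)
    for r in range(1, n + 1):
        for idx in itertools.combinations(range(n), r):
            for vals in itertools.product(values, repeat=r):
                y = list(x)
                for i, v in zip(idx, vals):
                    y[i] = v
                if classify(y) != base:
                    return r - 1  # an r-coordinate attack exists, so radius < r
    return n  # no perturbation changes the class

def global_robustness(dataset):
    # Global robustness: expectation of the maximum safe radius over a test set.
    radii = [max_safe_radius_l0(x) for x in dataset]
    return sum(radii) / len(radii)

data = [[1.0, 1.0, 1.0, 0.0],   # needs 3 coordinates zeroed to flip -> radius 2
        [1.0, 0.0, 0.0, 0.0]]   # one coordinate zeroed flips it     -> radius 0
print(global_robustness(data))  # -> 1.0
```

Any adversarial example with k changed coordinates immediately yields an upper bound of k - 1 on the safe radius, which is how the paper's anytime upper bounds are tightened as stronger attacks are found.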




