Towards the Quantification of Safety Risks in Deep Neural Networks

by Peipei Xu et al.

Safety concerns have been raised about deep neural networks (DNNs) when they are applied to critical sectors. In this paper, we define safety risks by requiring that the network's decisions align with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. To quantify a risk, we take the maximum radius of the safe norm ball within which no such risk exists; the computation of this maximum safe radius is reduced to the computation of the corresponding Lipschitz metric, which becomes the quantity to be computed. In addition to the known adversarial, reachability, and invariant examples, we identify a new class of risk, the uncertainty example, on which humans can decide easily but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support the efficient computation of these metrics. We evaluate our method on several benchmark neural networks, including ACAS Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method achieves competitive performance on safety quantification in terms of the tightness and efficiency of the computation. Importantly, as a generic approach, our method works with a broad class of safety risks and imposes no restrictions on the structure of neural networks.
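To make the reduction from a safe radius to a Lipschitz metric concrete, the following is a minimal, hedged sketch of the underlying idea: estimate a local Lipschitz constant of a scalar network output in a derivative-free way (here, by naive random sampling rather than the paper's optimization scheme or its GPU tensor parallelization), then convert a decision margin into a lower bound on the safe radius. The function names `local_lipschitz_estimate` and `safe_radius_lower_bound` are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def local_lipschitz_estimate(f, x, radius, n_samples=1000, seed=0):
    """Derivative-free estimate of the local Lipschitz constant of a
    scalar function f around x, obtained by sampling pairs of points in
    an L-infinity ball of the given radius and taking the largest
    observed slope.  Illustrative only: the paper's algorithm uses a
    more refined derivative-free optimization scheme."""
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(n_samples):
        a = x + rng.uniform(-radius, radius, size=x.shape)
        b = x + rng.uniform(-radius, radius, size=x.shape)
        dist = np.max(np.abs(a - b))  # L-infinity distance
        if dist > 0.0:
            best = max(best, abs(f(a) - f(b)) / dist)
    return best

def safe_radius_lower_bound(f, x, margin, radius, **kw):
    """If |f| changes by at most L per unit of L-infinity distance, a
    decision margin cannot be exhausted within margin / L of x, giving
    a (sampled, hence heuristic) lower bound on the safe radius."""
    L = local_lipschitz_estimate(f, x, radius, **kw)
    return margin / L if L > 0.0 else radius
```

For example, for `f(v) = 2*v[0]` the true Lipschitz constant (w.r.t. the L-infinity norm) is 2, so the estimate approaches 2 from below and a margin of 1 yields a safe-radius bound of at least 0.5. Note the sampling-based estimate is a lower bound on the true constant, so the resulting radius bound is heuristic rather than sound; the paper's contribution is precisely to compute such metrics with guarantees.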




