Model-Agnostic Reachability Analysis on Deep Neural Networks

by Chi Zhang, et al.

Verification plays an essential role in the formal analysis of safety-critical systems. Most current verification methods have specific requirements when working on Deep Neural Networks (DNNs): they either target one particular network category, e.g., Feedforward Neural Networks (FNNs), or networks with specific activation functions, e.g., ReLU. In this paper, we develop a model-agnostic verification framework, called DeepAgn, and show that it can be applied to FNNs, Recurrent Neural Networks (RNNs), or a mixture of both. Under the assumption of Lipschitz continuity, DeepAgn analyses the reachability of DNNs based on a novel optimisation scheme with a global convergence guarantee. It does not require access to the network's internal structures, such as layers and parameters. Through reachability analysis, DeepAgn can tackle several well-known robustness problems, including computing the maximum safe radius for a given input and generating ground-truth adversarial examples. We also empirically demonstrate DeepAgn's superior capability and efficiency, compared with other state-of-the-art verification approaches, in handling a broader class of deep neural networks, including both FNNs and RNNs with very deep layers and millions of neurons.
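To illustrate the principle behind Lipschitz-based, model-agnostic reachability, the following is a minimal sketch (not the paper's actual DeepAgn algorithm): given only black-box query access to a function and a known Lipschitz constant, a Piyavskii-Shubert-style scheme brackets the function's minimum over an interval with provable lower and upper bounds. The function name `lipschitz_min_bounds` and all parameters are illustrative assumptions, not names from the paper.

```python
import heapq

def lipschitz_min_bounds(f, a, b, K, tol=1e-3, max_iter=10_000):
    """Return (lower, upper) bounds on min f over [a, b] for a K-Lipschitz f.

    Sketch only: treats f as a black box (no layers or parameters needed),
    mirroring the model-agnostic setting described in the abstract.
    """
    fa, fb = f(a), f(b)
    best = min(fa, fb)                      # upper bound: best sample so far
    lb = 0.5 * (fa + fb - K * (b - a))      # Lipschitz lower bound on [a, b]
    # Heap entries: (lower bound on interval, left, f(left), right, f(right))
    heap = [(lb, a, fa, b, fb)]
    for _ in range(max_iter):
        lb, x0, f0, x1, f1 = heapq.heappop(heap)
        if best - lb <= tol:                # bounds have converged
            return lb, best
        # Point where the two Lipschitz cones from the endpoints intersect
        xm = 0.5 * (x0 + x1) + (f0 - f1) / (2 * K)
        fm = f(xm)
        best = min(best, fm)
        # Split the interval and push both halves with refined lower bounds
        for l, fl, r, fr in ((x0, f0, xm, fm), (xm, fm, x1, f1)):
            child_lb = 0.5 * (fl + fr - K * (r - l))
            heapq.heappush(heap, (child_lb, l, fl, r, fr))
    return lb, best
```

Applied to, say, `math.sin` on [0, 2π] with K = 1, the returned interval tightens around the true minimum of -1; the same bounding idea, generalised with a global convergence guarantee, is what makes reachability quantities such as the maximum safe radius computable without inspecting the network's internals.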




