Reachability analysis of neural networks using mixed monotonicity

11/15/2021
by Pierre-Jean Meyer, et al.

This paper presents a new reachability analysis tool that computes an interval over-approximation of the output set of a feedforward neural network under given input uncertainty. The proposed approach adapts an existing mixed-monotonicity method for the reachability analysis of dynamical systems to neural networks, and applies it to all possible partial networks within the given network. This ensures that the intersection of the obtained results is the tightest interval over-approximation of each layer's output that can be obtained using mixed-monotonicity. Unlike other tools in the literature, which focus on small classes of piecewise-affine or monotone activation functions, the main strength of our approach is its generality: it can handle neural networks with any Lipschitz-continuous activation function. In addition, the simplicity of the proposed framework lets users easily add unimplemented activation functions by providing only the function, its derivative, and the global extrema of the derivative together with the arguments at which they are attained. Our algorithm is tested and compared to five other interval-based tools on 1000 randomly generated neural networks for four activation functions (ReLU, TanH, ELU, SiLU). We show that our tool always outperforms the Interval Bound Propagation method, and that we obtain tighter output bounds than ReluVal, Neurify, VeriNet and CROWN (when they are applicable) in 15 to 60 percent of cases.
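To make the idea concrete, below is a minimal Python sketch of how a mixed-monotonicity enclosure can be computed for a single layer. It is not the paper's full algorithm, which additionally intersects the bounds obtained over all partial networks and refines the derivative bounds locally using the arguments of their global extrema; this sketch uses global derivative bounds only, and the helper names (affine_bounds, mm_activation_bounds) and the quoted SiLU derivative range are illustrative assumptions.

# Minimal sketch (assumed names, not the paper's implementation) of one
# layer of interval propagation with a mixed-monotonicity activation bound.
import numpy as np

def affine_bounds(W, b, xl, xu):
    """Exact interval image of the affine map x -> W @ x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ xl + Wn @ xu + b, Wp @ xu + Wn @ xl + b

def mm_activation_bounds(f, xl, xu, dmin, dmax):
    """Mixed-monotonicity over-approximation of f([xl, xu]) for a scalar
    Lipschitz activation f applied elementwise, given global derivative
    bounds dmin <= f' <= dmax.  The decomposition
        g(x, xhat) = f(x) + min(dmin, 0) * (xhat - x)
    is increasing in x and decreasing in xhat, so [g(xl, xu), g(xu, xl)]
    is a sound enclosure; the symmetric decomposition built from
    max(dmax, 0) gives a second enclosure, and intersecting the two
    tightens the result.  For a monotone activation (dmin >= 0) this
    collapses to the exact image [f(xl), f(xu)]."""
    w = xu - xl
    dm, dM = min(dmin, 0.0), max(dmax, 0.0)
    lo = np.maximum(f(xl) + dm * w, f(xu) - dM * w)
    hi = np.minimum(f(xu) - dm * w, f(xl) + dM * w)
    return lo, hi

# Example with SiLU, a non-monotone activation whose derivative lies in
# roughly [-0.0998, 1.0998].
silu = lambda x: x / (1.0 + np.exp(-x))
W = np.array([[1.0, -0.5], [0.3, 0.8]])
b = np.zeros(2)
xl, xu = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
zl, zu = affine_bounds(W, b, xl, xu)
yl, yu = mm_activation_bounds(silu, zl, zu, -0.0998, 1.0998)
print(yl, yu)

In the paper, this kind of per-layer enclosure is not applied layer by layer in isolation: bounds are computed for every partial network and intersected, which is what yields the tightest interval over-approximation attainable with mixed-monotonicity.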
