Exploiting Verified Neural Networks via Floating Point Numerical Error

03/06/2020
by Kai Jia, et al.

We show how to exploit floating point error to construct adversarial examples for neural networks whose robustness against ℓ_∞-bounded input perturbations has been exactly verified. We argue that any exact verification of real-valued neural networks must accurately model the implementation details of the floating point arithmetic used during inference and verification.
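To illustrate the kind of discrepancy the abstract refers to (this is only a sketch, not the paper's attack), the snippet below compares the logits of a hypothetical linear classifier evaluated with float32 accumulation against a higher-precision reference standing in for the verifier's idealized real-valued semantics. All weights and inputs are made up; the point is that the rounding gap between the two evaluations is nonzero, and near a decision boundary a gap of that size can flip the predicted class.

```python
import numpy as np

# Hypothetical 2-class linear "network": logits = W @ x + b.
# A verifier reasoning in exact real arithmetic and an inference engine
# using float32 round the same dot products differently; near a decision
# boundary that tiny gap can change the predicted label.
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 1024)).astype(np.float32)
b = np.zeros(2, dtype=np.float32)
x = rng.standard_normal(1024).astype(np.float32)

# Reference: accumulate in float64, a stand-in for the idealized
# real-valued semantics assumed by an exact verifier.
logits_ref = W.astype(np.float64) @ x.astype(np.float64) + b

# Deployed inference: plain float32 accumulation.
logits_f32 = W @ x + b

gap = np.max(np.abs(logits_ref - logits_f32.astype(np.float64)))
print("float64 logits:", logits_ref)
print("float32 logits:", logits_f32)
print("max rounding gap:", gap)
# If the true margin between the two logits is smaller than this gap,
# the float32 model and the idealized model can disagree on the label,
# even though the idealized model was "exactly" verified robust.
```

The exact size of the gap depends on the weights, the input, and the accumulation order used by the BLAS backend, which is precisely why the abstract argues that an exact verifier must model the deployed floating point implementation rather than assume real arithmetic.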
