Formalizing Piecewise Affine Activation Functions of Neural Networks in Coq

01/30/2023
by Andrei Aleksandrov, et al.

Verification of neural networks relies on activation functions being piecewise affine (pwa), which enables an encoding of the verification problem for theorem provers. In this paper, we present the first formalization of pwa activation functions for an interactive theorem prover tailored to verifying neural networks, carried out in Coq using the Coquelicot library for real analysis. As a proof of concept, we construct the popular pwa activation function ReLU. We integrate our formalization into a Coq model of neural networks and devise a verified transformation from a neural network N to a pwa function representing N by composing pwa functions constructed for each layer. This representation enables encodings for proof automation, e.g. Coq's tactic lra, a decision procedure for linear real arithmetic. Further, our formalization paves the way for integrating Coq into neural network verification frameworks as a fallback prover when automated proving fails.
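To illustrate the idea behind the abstract, the following is a minimal Coq sketch, not the paper's actual development: it defines ReLU over Coq's standard reals as the pointwise maximum of 0 and x (so it agrees with an affine function on each of the regions x <= 0 and x >= 0), and closes a simple linear-arithmetic fact about one piece with the lra tactic. The definition `ReLU` and the lemma names are hypothetical and chosen here only for illustration.

```coq
(* Illustrative sketch only; not the formalization from the paper. *)
Require Import Reals Lra.
Open Scope R_scope.

(* ReLU as the pointwise maximum of 0 and x; on each of the two regions
   x <= 0 and x >= 0 it coincides with an affine function. *)
Definition ReLU (x : R) : R := Rmax 0 x.

(* On the region 0 <= x, ReLU agrees with the identity (an affine piece). *)
Lemma ReLU_nonneg : forall x : R, 0 <= x -> ReLU x = x.
Proof.
  intros x Hx.
  unfold ReLU.
  apply Rmax_right; exact Hx.
Qed.

(* A simple linear-arithmetic fact about the same piece, discharged by lra
   after rewriting with the lemma above. *)
Lemma ReLU_id_bound : forall x : R, 0 <= x -> x <= ReLU x + 1.
Proof.
  intros x Hx.
  rewrite (ReLU_nonneg x Hx).
  lra.
Qed.
```

Once each activation is available as a pwa function in this style, goals about a network reduce (per the abstract, by composing the per-layer pwa functions) to statements over affine pieces, which is the kind of goal lra can decide.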
