Efficient Neural Network Analysis with Sum-of-Infeasibilities

03/19/2022
by Haoze Wu, et al.

Inspired by sum-of-infeasibilities methods in convex optimization, we propose a novel procedure for analyzing verification queries on neural networks with piecewise-linear activation functions. Given a convex relaxation which over-approximates the non-convex activation functions, we encode the violations of activation functions as a cost function and optimize it with respect to the convex relaxation. The cost function, referred to as the Sum-of-Infeasibilities (SoI), is designed so that its minimum is zero and achieved only if all the activation functions are satisfied. We propose a stochastic procedure, DeepSoI, to efficiently minimize the SoI. An extension to a canonical case-analysis-based complete search procedure can be achieved by replacing the convex procedure executed at each search state with DeepSoI. Extending the complete search with DeepSoI achieves multiple simultaneous goals: 1) it guides the search towards a counter-example; 2) it enables more informed branching decisions; and 3) it creates additional opportunities for bound derivation. An extensive evaluation across different benchmarks and solvers demonstrates the benefit of the proposed techniques. In particular, we demonstrate that SoI significantly improves the performance of an existing complete search procedure. Moreover, the SoI-based implementation outperforms other state-of-the-art complete verifiers. We also show that our technique can efficiently improve upon the perturbation bound derived by a recent adversarial attack algorithm.
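The key property of the SoI cost — non-negative over the relaxation, and zero exactly when every activation constraint is satisfied — can be illustrated for ReLU networks. The sketch below is a minimal illustration, not the paper's implementation: it assumes ReLU activations and the standard triangle relaxation (any relaxed point satisfies y ≥ 0 and y ≥ x), and the function names `relu_violation` and `sum_of_infeasibilities` are hypothetical.

```python
def relu_violation(x, y):
    # For the constraint y == max(0, x), any point (x, y) in the
    # triangle relaxation satisfies y >= 0 and y >= x, so
    # y - max(0, x) >= 0, with equality iff the ReLU holds exactly.
    return y - max(0.0, x)

def sum_of_infeasibilities(pre_activations, post_activations):
    # SoI over a set of neurons: a sum of per-neuron violations that is
    # zero iff every activation constraint is exactly satisfied.
    return sum(relu_violation(x, y)
               for x, y in zip(pre_activations, post_activations))

# An exact assignment (each y == max(0, x)) yields SoI == 0.
exact = sum_of_infeasibilities([-1.0, 2.0], [0.0, 2.0])    # 0.0
# A relaxed, non-exact assignment yields SoI > 0.
relaxed = sum_of_infeasibilities([-1.0, 2.0], [0.5, 2.0])  # 0.5
```

Minimizing such a cost over the convex relaxation (as DeepSoI does, stochastically) drives the relaxed point toward a true counter-example, since SoI reaching zero means all activation constraints are met.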


