Unwrapping All ReLU Networks

05/16/2023
by Mattia Jacopo Villani, et al.

Deep ReLU networks can be decomposed into a collection of linear models, each defined on a region of a partition of the input space. This paper provides three results extending this theory. First, we extend this linear decomposition to Graph Neural Networks, tensor convolutional networks, and networks with multiplicative interactions. Second, we prove that neural networks can be understood as interpretable models such as multivariate decision trees and logical theories. Finally, we show how this decomposition leads to computing cheap and exact SHAP values. We validate the theory through experiments on Graph Neural Networks.
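As a rough illustration (not the authors' implementation), the sketch below shows the core unwrapping idea for a plain ReLU MLP: at a given input, each ReLU is frozen into a 0/1 diagonal mask determined by its activation pattern, so composing the layer-wise affine maps yields the exact linear model governing that input's region. The helper name `unwrap_relu_mlp` and the toy network are hypothetical.

```python
import numpy as np

def unwrap_relu_mlp(weights, biases, x):
    """Return (W_hat, b_hat) such that f(x) == W_hat @ x + b_hat
    on the linear region containing x. The final layer is assumed
    to be linear (no ReLU), as is common for regression or logits."""
    W_hat = np.eye(len(x))
    b_hat = np.zeros(len(x))
    z = np.asarray(x, dtype=float)

    for l, (W, b) in enumerate(zip(weights, biases)):
        pre = W @ z + b                     # pre-activation at layer l
        W_hat = W @ W_hat                   # compose affine maps so far
        b_hat = W @ b_hat + b
        if l < len(weights) - 1:            # ReLU on hidden layers only
            mask = (pre > 0).astype(float)  # activation pattern at x
            W_hat = mask[:, None] * W_hat   # freeze ReLU as diag(mask)
            b_hat = mask * b_hat
            z = mask * pre
        else:
            z = pre
    return W_hat, b_hat

# Usage: the unwrapped linear model reproduces the network's output at x.
rng = np.random.default_rng(0)
dims = [4, 8, 8, 2]
Ws = [rng.standard_normal((dims[i + 1], dims[i])) for i in range(3)]
bs = [rng.standard_normal(dims[i + 1]) for i in range(3)]
x = rng.standard_normal(4)

W_hat, b_hat = unwrap_relu_mlp(Ws, bs, x)
out = x.copy()
for l, (W, b) in enumerate(zip(Ws, bs)):
    out = W @ out + b
    if l < len(Ws) - 1:
        out = np.maximum(out, 0.0)
assert np.allclose(W_hat @ x + b_hat, out)
```

Because the network is exactly linear on each region, per-region attributions such as SHAP values can in principle be read off from the coefficients of `W_hat`, which is what makes the decomposition useful for interpretability.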
