A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions

02/24/2022
by Daniel Lundstrom, et al.

As the efficacy of deep learning (DL) grows, so do concerns about the lack of transparency of these black-box models. Attribution methods aim to improve the transparency of DL models by quantifying each input feature's importance to a model's prediction. The Integrated Gradients (IG) method sets itself apart by claiming that other methods fail to satisfy desirable axioms, while IG and methods like it uniquely satisfy them. This paper comments on fundamental aspects of IG and its applications and extensions: 1) We identify key unaddressed differences between the function spaces arising in DL attribution and those assumed in the supporting literature, differences that call prior claims of IG's uniqueness into question. We show that, with the introduction of an additional axiom, non-decreasing positivity, the uniqueness claim can be established. 2) We address the question of input sensitivity by identifying function spaces on which IG is, and is not, Lipschitz continuous in the attributed input. 3) We show how axioms for single-baseline IG methods impart analogous properties to methods whose baseline is a probability distribution over the input sample space. 4) We introduce a means of decomposing the IG map with respect to a layer of internal neurons while simultaneously obtaining internal-neuron attributions. Finally, we present experimental results validating the decomposition and the internal-neuron attributions.
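For reference, the single-baseline IG attribution of feature i, for a model F, input x, and baseline x', is the path integral introduced by Sundararajan et al. (2017):

    IG_i(x; x') = (x_i - x'_i) \int_0^1 \frac{\partial F}{\partial x_i}\bigl(x' + \alpha (x - x')\bigr)\, d\alpha

The sketch below is a minimal Riemann-sum approximation of this integral, together with the distribution-baseline variant of point 3, where the single baseline x' is replaced by an expectation over baselines drawn from a distribution. The helpers grad_fn (a callable returning the gradient of the model output with respect to its input) and baseline_sampler are assumptions for illustration, not the paper's code.

import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    # Approximate IG via a Riemann sum over the straight-line path
    # from the baseline x' to the input x.
    alphas = np.linspace(0.0, 1.0, steps + 1)  # path parameters in [0, 1]
    avg_grad = np.zeros_like(x, dtype=float)
    for alpha in alphas:
        avg_grad += grad_fn(baseline + alpha * (x - baseline))
    avg_grad /= alphas.size
    # Scale the averaged gradient by (x - x'); as steps grows, the
    # attributions sum to F(x) - F(x') (the completeness axiom).
    return (x - baseline) * avg_grad

def distribution_baseline_ig(grad_fn, x, baseline_sampler, n_baselines=20, steps=50):
    # Distribution-baseline setting: average single-baseline IG over
    # baselines x' drawn from a distribution D, i.e. E_{x'~D}[IG(x; x')].
    attrs = [integrated_gradients(grad_fn, x, baseline_sampler(), steps)
             for _ in range(n_baselines)]
    return np.mean(attrs, axis=0)

# Toy check with F(x) = sum(x**2), whose gradient is 2x; here IG recovers
# attributions x_i**2 summing to F(x) - F(0):
# attrs = integrated_gradients(lambda z: 2 * z, np.array([1.0, -2.0]), np.zeros(2))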
