On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box

by   Yi Cai, et al.

Attribution methods shed light on the explainability of data-driven approaches such as deep learning models by revealing the features that contribute most to a model's decisions. A widely accepted way of deriving feature attributions is to analyze the gradients of the target function with respect to the input features. Gradient analysis requires full access to the target system, meaning that solutions of this kind treat the target system as a white-box. However, the white-box assumption may be untenable due to security and safety concerns, which limits the practical applicability of such methods. To address this limitation, this paper presents GEEX (gradient-estimation-based explanation), an explanation method that delivers gradient-like explanations under a black-box setting. Furthermore, we integrate the proposed method with a path method. The resulting approach, iGEEX (integrated GEEX), satisfies the four fundamental axioms of attribution methods: sensitivity, insensitivity, implementation invariance, and linearity. With a focus on image data, extensive experiments empirically show that the proposed methods outperform state-of-the-art black-box methods and achieve performance competitive with methods that have full access.
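The abstract's core idea, estimating gradients through queries alone and feeding them into a path method, can be illustrated with a generic zeroth-order estimator. The sketch below is an assumption-laden illustration, not the paper's exact GEEX estimator: it uses Gaussian smoothing with antithetic sampling to approximate the gradient of a black-box scalar function, then averages estimated gradients along a straight-line path from a baseline, in the spirit of integrated gradients. All function and parameter names here are hypothetical.

```python
import numpy as np

def estimate_gradient(f, x, sigma=0.1, n_samples=256, seed=0):
    """Zeroth-order gradient estimate of a black-box scalar function f at x.

    Gaussian smoothing: E[(f(x + s*u) - f(x - s*u)) / (2s) * u] approximates
    grad f(x) for small s, using only queries to f. Generic illustration,
    not the exact GEEX estimator.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        # Antithetic pair (x + su, x - su) reduces estimator variance.
        grad += (f(x + sigma * u) - f(x - sigma * u)) / (2 * sigma) * u
    return grad / n_samples

def integrated_attribution(f, x, baseline, steps=32, **kw):
    """Path-method attribution in the spirit of integrated gradients:
    average the estimated gradients along the straight line from the
    baseline to x, then scale by (x - baseline)."""
    x = np.asarray(x, dtype=float)
    baseline = np.asarray(baseline, dtype=float)
    avg = np.zeros_like(x)
    for k in range(1, steps + 1):
        point = baseline + (k / steps) * (x - baseline)
        avg += estimate_gradient(f, point, **kw)
    return (x - baseline) * avg / steps
```

On a simple quadratic `f(x) = sum(x**2)`, the estimator recovers the true gradient `2x` up to sampling noise, and the summed attributions approximately satisfy the completeness property, i.e. they add up to `f(x) - f(baseline)`.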



Related papers:
Don't Paint It Black: White-Box Explanations for Deep Learning in Computer Security

Deep learning is increasingly used as a basic building block of security...

Foiling Explanations in Deep Neural Networks

Deep neural networks (DNNs) have greatly impacted numerous fields over t...

Evaluating Attribution Methods using White-Box LSTMs

Interpretability methods for neural networks are difficult to evaluate b...

Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis

We describe a novel attribution method which is grounded in Sensitivity ...

Sound Explanation for Trustworthy Machine Learning

We take a formal approach to the explainability problem of machine learn...

Generative Perturbation Analysis for Probabilistic Black-Box Anomaly Attribution

We address the task of probabilistic anomaly attribution in the black-bo...

A Rigorous Study of Integrated Gradients Method and Extensions to Internal Neuron Attributions

As the efficacy of deep learning (DL) grows, so do concerns about the la...
