Interpretable Explanations of Black Boxes by Meaningful Perturbation

04/11/2017
by Ruth Fong, et al.

As machine learning algorithms are increasingly applied to high-impact yet high-risk tasks, e.g. problems in health, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks "look" in an image for evidence supporting their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions. First, we propose a general framework for learning different kinds of explanations for any black-box algorithm. Second, we introduce a paradigm that learns the minimal salient region of an image by directly editing the image and observing the corresponding changes to the model's output. Unlike previous work, our method is model-agnostic and testable because it is grounded in replicable image perturbations.
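The core idea described in the abstract, learning which region of an image matters by perturbing it and watching the classifier's score, can be sketched as an explicit optimization over a mask. The snippet below is a minimal illustration under assumptions not stated here: the pretrained ResNet-50 classifier, the Gaussian-blur baseline, the 28x28 mask resolution, the hyper-parameters, and the file name input.jpg are all illustrative choices, and this is not the authors' released implementation. Practical versions add further regularization to obtain crisp masks.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Any differentiable classifier works; a pretrained ResNet-50 is used purely for illustration.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).to(device).eval()
for p in model.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("input.jpg").convert("RGB")).unsqueeze(0).to(device)  # hypothetical input file

# Baseline "deleted" image: a heavily blurred copy (noise or a constant are other common choices).
blurred = transforms.GaussianBlur(kernel_size=51, sigma=10.0)(x)

target = model(x).argmax(dim=1).item()   # class whose evidence we want to localize

# Low-resolution mask (upsampled before use) keeps the learned explanation smooth.
mask_logits = torch.zeros(1, 1, 28, 28, device=device, requires_grad=True)
optimizer = torch.optim.Adam([mask_logits], lr=0.1)
l1_weight = 0.05                         # assumed regularization strength

for step in range(300):
    m = torch.sigmoid(mask_logits)       # values in (0, 1): 1 keeps a pixel, 0 replaces it
    m_up = F.interpolate(m, size=x.shape[-2:], mode="bilinear", align_corners=False)
    perturbed = m_up * x + (1 - m_up) * blurred
    score = F.softmax(model(perturbed), dim=1)[0, target]
    # Drive the class score down while deleting as little of the image as possible.
    loss = score + l1_weight * (1 - m_up).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# High values mark regions whose removal hurt the prediction most.
saliency = 1 - torch.sigmoid(mask_logits).detach()
```

Blurring rather than zeroing pixels is one way to keep the perturbed image close to the natural image statistics the network was trained on, which makes the resulting score drops easier to interpret.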
