Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification

12/14/2020
by Martin Charachon, et al.

Explaining the decisions of black-box classifiers is paramount in sensitive domains such as medical imaging, since clinicians' confidence is necessary for adoption. Various explanation approaches have been proposed, among which perturbation-based approaches are very promising. Within this class of methods, we leverage a learning framework to produce our visual explanation method. For a given classifier, we train two generators to produce, from an input image, the so-called similar and adversarial images. The similar image must receive the same classification as the input image, whereas the adversarial image must not. The visual explanation is then built as the difference between these two generated images. Using metrics from the literature, our method outperforms state-of-the-art approaches. The proposed approach is model-agnostic and has a low computational burden at prediction time, making it suitable for real-time systems. Finally, we show that random geometric augmentations applied to the original image play a regularization role that improves several previously proposed explanation methods. We validate our approach on a large chest X-ray database.
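The construction described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `similar_generator` and `adversarial_generator` are hypothetical stand-ins for the two trained networks, and the horizontal flip stands in for the random geometric augmentations used as regularization.

```python
import numpy as np

def similar_generator(x):
    # Hypothetical stand-in for the trained "similar" generator, whose
    # output keeps the classifier's prediction on x unchanged.
    return 0.98 * x

def adversarial_generator(x):
    # Hypothetical stand-in for the trained "adversarial" generator, whose
    # output flips the classifier's prediction with minimal visible change.
    return 0.98 * x + 0.2 * (x > 0.5)

def visual_explanation(x, n_augment=8, seed=None):
    """Explanation = |similar - adversarial|, averaged over random
    horizontal flips (a simple geometric augmentation)."""
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_augment):
        flip = rng.integers(0, 2)            # randomly flip the input...
        xa = x[:, ::-1] if flip else x
        e = np.abs(similar_generator(xa) - adversarial_generator(xa))
        maps.append(e[:, ::-1] if flip else e)  # ...and undo it on the map
    return np.mean(maps, axis=0)

x = np.random.default_rng(0).random((64, 64))  # toy "image"
expl = visual_explanation(x)
print(expl.shape)  # (64, 64)
```

In practice the generators are neural networks trained jointly against the fixed classifier, and averaging over augmented copies of the input smooths the resulting saliency map.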

Related research

- Contextual Local Explanation for Black Box Classifiers (10/02/2019): We introduce a new model-agnostic explanation technique which explains t...
- Leveraging Conditional Generative Models in a General Explanation Framework of Classifier Decisions (06/21/2021): Providing a human-understandable explanation of classifiers' decisions h...
- Explaining the Black-box Smoothly: A Counterfactual Approach (01/11/2021): We propose a BlackBox Counterfactual Explainer that is explicitly develo...
- Explanation by Progressive Exaggeration (11/01/2019): As machine learning methods see greater adoption and implementation in h...
- Fast Hierarchical Games for Image Explanations (04/13/2021): As modern complex neural networks keep breaking records and solving hard...
- Regularized adversarial examples for model interpretability (11/18/2018): As machine learning algorithms continue to improve, there is an increasi...
- Domain aware medical image classifier interpretation by counterfactual impact analysis (07/13/2020): The success of machine learning methods for computer vision tasks has dr...
