Counterfactual Explanation and Instance-Generation using Cycle-Consistent Generative Adversarial Networks

by Tehseen Zia, et al.

Image-based diagnosis is now a vital aspect of modern, automation-assisted diagnosis. To enable models to produce pixel-level diagnoses, pixel-level ground-truth labels are essentially required. However, since it is often not straightforward to obtain such labels in many application domains, such as medical imaging, classification-based approaches have become the de facto standard for performing diagnosis. Although they can identify class-salient regions, they may not be useful for diagnosis, where capturing all of the evidence is an important requirement. Alternatively, a counterfactual explanation (CX) aims to provide explanations through a causal reasoning process of the form "If X had not happened, Y would not have happened". Existing CX approaches, however, use a classifier to explain the features that can change its predictions. Thus, they can only explain class-salient features, rather than the entire object of interest. This motivates us to propose a novel CX strategy that does not rely on image classification. This work is inspired by recent developments in generative adversarial network (GAN)-based image-to-image domain translation, and leverages them to translate an abnormal image into its counterpart normal image (i.e., a counterfactual instance, CI) in order to find the discrepancy map between the two. Since it is generally not possible to obtain paired abnormal and normal images, we leverage the cycle-consistency principle (a.k.a. CycleGAN) to perform the translation in an unsupervised way. We formulate CX in terms of a discrepancy map that, when added to the abnormal image, makes it indistinguishable from the CI. We evaluate our method on three datasets: a synthetic dataset, a tuberculosis dataset, and the BraTS dataset. All these experiments confirm the superiority of the proposed method in generating accurate CXs and CIs.
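The discrepancy-map formulation above can be sketched in a few lines. In the paper the abnormal-to-normal translator is a trained CycleGAN generator; here it is stubbed with a simple smoothing function purely for illustration, so the code shows only the arithmetic relationship between the abnormal image, the counterfactual instance (CI), and the counterfactual explanation (CX):

```python
import numpy as np

def generator_abnormal_to_normal(x):
    """Hypothetical abnormal->normal generator (stand-in for a CycleGAN
    generator trained without paired data). A 3x3 box blur plays the role
    of 'removing' the abnormality for illustration only."""
    kernel = np.ones((3, 3)) / 9.0
    padded = np.pad(x, 1, mode="edge")
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

abnormal = np.random.rand(8, 8)               # toy "abnormal" image
ci = generator_abnormal_to_normal(abnormal)   # counterfactual instance (CI)
discrepancy_map = ci - abnormal               # counterfactual explanation (CX)

# By construction, adding the map to the abnormal image yields the CI,
# i.e. the explained image becomes indistinguishable from its counterfactual.
assert np.allclose(abnormal + discrepancy_map, ci)
```

The generator here is an assumption for demonstration; with a real CycleGAN generator the same `ci - abnormal` subtraction would highlight exactly the regions the translation changed.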




