Metrics for saliency map evaluation of deep learning explanation methods

by Tristan Gomez et al.

Due to the black-box nature of deep learning models, there has been a recent surge of methods for producing visual explanations of CNNs. Given the high cost of user studies, metrics are necessary to compare and evaluate these methods. In this paper, we critically analyze the Deletion Area Under Curve (DAUC) and Insertion Area Under Curve (IAUC) metrics proposed by Petsiuk et al. (2018), which were designed to evaluate the faithfulness of saliency maps generated by generic methods such as Grad-CAM or RISE. First, we show that these metrics ignore the actual saliency score values, as only the ranking of the scores is taken into account. Consequently, one can drastically change the visual appearance of a saliency map without changing the pixel ranking, i.e. without changing its DAUC and IAUC values, which shows that these metrics are insufficient by themselves. Second, we argue that during the computation of DAUC and IAUC, the model is presented with images that are outside the training distribution, which might lead to unreliable behavior of the model being explained. To complement DAUC and IAUC, we propose new metrics that quantify the sparsity and the calibration of explanation methods, two previously unstudied properties. Finally, we give general remarks about the metrics studied in this paper and discuss how to evaluate them in a user study.
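The ranking-only behavior described above is easy to see in code. Below is a minimal, hypothetical sketch of a deletion-style metric in the spirit of DAUC (not the authors' implementation): pixels are zeroed out from most to least salient and the model's score is averaged over the degraded images. The `model` callable and the `steps` parameter are illustrative assumptions; a real evaluation would use a trained CNN and per-pixel deletion.

```python
import numpy as np

def deletion_auc(model, image, saliency, steps=10):
    """Sketch of a DAUC-style deletion metric: zero out pixels from most
    to least salient and average the model's score over the degraded
    images (rectangle-rule area under the score curve). A faithful map
    should make the score drop quickly, giving a low value.
    `model` is any callable mapping a 2D image to a scalar class score."""
    h, w = image.shape
    # Only the *ranking* of saliency scores is used from here on.
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    n_pixels = h * w
    scores = [model(image)]
    for k in range(1, steps + 1):
        degraded = image.copy().ravel()
        degraded[order[: round(k / steps * n_pixels)]] = 0.0  # delete top-k pixels
        scores.append(model(degraded.reshape(h, w)))
    return float(np.mean(scores))
```

Because only `np.argsort` of the saliency map enters the computation, any strictly monotone transform of the scores (e.g. cubing a positive map) yields an identical value, even though the map's visual appearance changes drastically; this is the first limitation the paper points out.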


Towards Visual Saliency Explanations of Face Recognition

Deep convolutional neural networks have been pushing the frontier of fac...

Sanity Checks for Saliency Metrics

Saliency maps are a popular approach to creating post-hoc explanations o...

Evaluating Saliency Map Explanations for Convolutional Neural Networks: A User Study

Convolutional neural networks (CNNs) offer great machine learning perfor...

BOREx: Bayesian-Optimization–Based Refinement of Saliency Map for Image- and Video-Classification Models

Explaining a classification result produced by an image- and video-class...

Revisiting Saliency Metrics: Farthest-Neighbor Area Under Curve

Saliency detection has been widely studied because it plays an important...

Visualizing Color-wise Saliency of Black-Box Image Classification Models

Image classification based on machine learning is being commonly used. H...

Quantitative Evaluations on Saliency Methods: An Experimental Study

It has been long debated that eXplainable AI (XAI) is an important topic...
