Finding the right XAI method – A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science

03/01/2023
by Philine Bommer, et al.

Explainable artificial intelligence (XAI) methods shed light on the predictions of deep neural networks (DNNs). Several approaches exist, and some have already been applied successfully in climate science. However, ground-truth explanations are often missing, which complicates their evaluation and validation and, in turn, the choice of XAI method. Therefore, in this work, we introduce XAI evaluation in the context of climate research and assess different desired explanation properties, namely robustness, faithfulness, randomization, complexity, and localization. To this end, we build upon previous work and train a multi-layer perceptron (MLP) and a convolutional neural network (CNN) to predict the decade from annual-mean temperature maps. Next, we apply multiple local XAI methods, quantify their performance for each evaluation property, and compare them against a baseline test. Independent of the network type, we find that the XAI methods Integrated Gradients, Layer-wise Relevance Propagation, and InputGradients exhibit considerable robustness, faithfulness, and complexity while sacrificing randomization. The opposite holds for Gradient, SmoothGrad, NoiseGrad, and FusionGrad. Notably, explanations using input perturbations, such as SmoothGrad and Integrated Gradients, do not improve robustness and faithfulness, contrary to previous claims. Overall, our experiments offer a comprehensive overview of different properties of explanation methods in the climate-science context and support users in selecting a suitable XAI method.
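To make the ideas concrete, here is a minimal, self-contained sketch of one of the attribution methods named above (Integrated Gradients) together with a simple robustness-style check in the spirit of max-sensitivity: perturb the input slightly and measure how much the explanation changes. This is an illustration on a toy linear model with numerical gradients, not the paper's actual code or networks; all names (`grad`, `integrated_gradients`, `max_sensitivity`) are ours.

```python
import random

def grad(f, x, eps=1e-6):
    # central-difference gradient of a scalar function f at point x
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

def integrated_gradients(f, x, baseline, steps=50):
    # Riemann-sum approximation: average the gradient along the straight
    # line from baseline to x, then scale by (x - baseline)
    avg = [0.0] * len(x)
    for k in range(1, steps + 1):
        point = [b + (k / steps) * (xi - b) for xi, b in zip(x, baseline)]
        g = grad(f, point)
        avg = [a + gi / steps for a, gi in zip(avg, g)]
    return [(xi - b) * a for xi, b, a in zip(x, baseline, avg)]

def max_sensitivity(f, x, baseline, radius=0.05, samples=10, seed=0):
    # robustness proxy: worst-case L2 change of the attribution under
    # small random input perturbations (smaller = more robust)
    rng = random.Random(seed)
    base_attr = integrated_gradients(f, x, baseline)
    worst = 0.0
    for _ in range(samples):
        xp = [xi + rng.uniform(-radius, radius) for xi in x]
        attr = integrated_gradients(f, xp, baseline)
        diff = sum((a - b) ** 2 for a, b in zip(attr, base_attr)) ** 0.5
        worst = max(worst, diff)
    return worst

# toy linear "model": for a zero baseline, attributions equal w_i * x_i,
# and their sum satisfies completeness: sum(attr) = f(x) - f(baseline)
w = [0.5, -1.0, 2.0]
f = lambda x: sum(wi * xi for wi, xi in zip(w, x))
x, baseline = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
attr = integrated_gradients(f, x, baseline)
sens = max_sensitivity(f, x, baseline)
```

For this linear model the attribution is exact (`attr` ≈ `[0.5, -2.0, 6.0]`) and the sensitivity stays small, since the explanation is itself linear in the input; the evaluation metrics in the paper probe exactly such properties for nonlinear networks, where they are far from guaranteed.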


research
03/16/2020

Towards Ground Truth Evaluation of Visual Explanations

Several methods have been proposed to explain the decisions of neural ne...
research
04/30/2022

Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics

The trustworthiness of neural networks is often challenged because they ...
research
09/07/2022

Explainable Artificial Intelligence to Detect Image Spam Using Convolutional Neural Network

Image spam threat detection has continually been a popular area of resea...
research
08/19/2022

Carefully choose the baseline: Lessons learned from applying XAI attribution methods for regression tasks in geoscience

Methods of eXplainable Artificial Intelligence (XAI) are used in geoscie...
research
09/22/2020

What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors

EXplainable AI (XAI) methods have been proposed to interpret how a deep ...
research
11/22/2022

Shortcomings of Top-Down Randomization-Based Sanity Checks for Evaluations of Deep Neural Network Explanations

While the evaluation of explanations is an important step towards trustw...
