Can Users Correctly Interpret Machine Learning Explanations and Simultaneously Identify Their Limitations?

09/15/2023
by Yueqing Xuan, et al.

Automated decision-making systems are becoming increasingly ubiquitous, motivating an immediate need for their explainability. However, it remains unclear whether users know what insights an explanation offers and, more importantly, what information it lacks. We conducted an online study with 200 participants to assess explainees' ability to realise known and unknown information for four representative explanations: transparent modelling, decision boundary visualisation, counterfactual explainability and feature importance. Our findings demonstrate that feature importance and decision boundary visualisation are the most comprehensible, but their limitations are not necessarily recognised by the users. In addition, correct interpretation of an explanation – i.e., understanding known information – is accompanied by high confidence, but a failure to gauge its limits – thus grasp unknown information – yields overconfidence; the latter phenomenon is especially prominent for feature importance and transparent modelling. Machine learning explanations should therefore embrace their richness and limitations to maximise understanding and curb misinterpretation.
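To make the four explanation types named above concrete, the sketch below is illustrative only (it is not part of the study's materials) and builds two of them with scikit-learn: a transparent model whose rules can be read directly, and feature importance via permutation. The Iris dataset and the model choices are assumptions made purely for the example.

```python
# Minimal illustrative sketch (not the study's setup): transparent modelling
# and feature importance, two of the four explanation types discussed above.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Transparent modelling: a shallow decision tree whose decision rules
# can be printed and inspected directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(X.columns)))

# Feature importance for a black-box model via permutation importance.
# Note the limitation the study probes: these scores rank features but say
# nothing about the direction of their effect or about feature interactions.
forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```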


Related research

- Predictability and Comprehensibility in Post-Hoc XAI Methods: A User-Centered Analysis (09/21/2023). Post-hoc explainability methods aim to clarify predictions of black-box ...
- Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach (10/22/2021). With the widespread use of machine learning to support decision-making, ...
- The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations (05/06/2022). Machine learning models in safety-critical settings like healthcare are ...
- Counterfactual Explanations for Machine Learning: A Review (10/20/2020). Machine learning plays a role in many deployed decision systems, often i...
- An Empirical Evaluation of Predicted Outcomes as Explanations in Human-AI Decision-Making (08/08/2022). In this work, we empirically examine human-AI decision-making in the pre...
- Explainability's Gain is Optimality's Loss? – How Explanations Bias Decision-making (06/17/2022). Decisions in organizations are about evaluating alternatives and choosin...
- Reconnoitering the class distinguishing abilities of the features, to know them better (11/23/2022). The relevance of machine learning (ML) in our daily lives is closely int...
