Explainable AI, but explainable to whom?

by Julie Gerlings, et al.

Advances in AI technologies have resulted in superior levels of AI-based model performance. However, this has also led to a greater degree of model complexity, resulting in 'black box' models. In response to the AI black box problem, the field of explainable AI (xAI) has emerged with the aim of providing explanations catered to human understanding, trust, and transparency. Yet, we still have a limited understanding of how xAI addresses the need for explainability in the context of healthcare. Our research explores the differing explanation needs amongst stakeholders during the development of an AI system for classifying COVID-19 patients for the ICU. We demonstrate that there is a constellation of stakeholders with different explanation needs, not just the 'user'. Further, the findings demonstrate how the need for xAI emerges through concerns associated with specific stakeholder groups, i.e., the development team, subject matter experts, decision makers, and the audience. Our findings contribute to the expansion of xAI by highlighting that different stakeholders have different explanation needs. From a practical perspective, the study provides insights on how AI systems can be adjusted to support different stakeholders' needs, ensuring better implementation and operation in a healthcare context.




