Explanation Ontology in Action: A Clinical Use-Case

10/04/2020
by Shruthi Chari, et al.

We address the lack of a semantic representation for user-centric explanations and different explanation types with our Explanation Ontology (https://purl.org/heals/eo). Such a representation is increasingly necessary as explainability has become an important problem in Artificial Intelligence, driven by the emergence of complex methods and their uptake in high-precision, user-facing settings. In this submission, we provide step-by-step guidance for system designers to use our ontology, introduced in our resource track paper, to plan and model explanations during the design of their Artificial Intelligence systems. We also provide a detailed example of applying this guidance in a clinical setting.
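As a concrete illustration of the planning step this guidance supports, the sketch below (not from the paper) shows how a system designer might load the ontology and enumerate the explanation types it models using Python's rdflib library. The class-label filter is a hypothetical heuristic, and the snippet assumes the persistent URL resolves to an RDF serialization that rdflib can parse.

```python
# A minimal sketch, not the authors' method: load the Explanation Ontology
# and list candidate explanation-type classes by inspecting rdfs:label
# annotations. Assumes rdflib is installed and that https://purl.org/heals/eo
# returns an RDF serialization rdflib can parse.
from rdflib import Graph, OWL, RDF, RDFS

g = Graph()
g.parse("https://purl.org/heals/eo")  # serialization format inferred from the HTTP response

# Surface OWL classes whose labels mention "explanation" -- a rough heuristic
# for the modeled explanation types (e.g., contrastive, counterfactual).
for cls in g.subjects(RDF.type, OWL.Class):
    for label in g.objects(cls, RDFS.label):
        if "explanation" in str(label).lower():
            print(f"{cls}  ->  {label}")
```

From such a listing, a designer can map each system requirement (e.g., "clinicians need contrastive explanations at decision time") onto the corresponding ontology class before committing to an implementation.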
