Explanation from Specification

12/13/2020
by Harish Naik, et al.

Explainable components in XAI algorithms often come from a familiar set of models, such as linear models or decision trees. We formulate an approach where the type of explanation produced is guided by a specification. Specifications are elicited from the user, possibly through interaction with the user and with contributions from other areas. Areas where a specification could be obtained include forensic, medical, and scientific applications. Providing a menu of possible types of specifications in an area is an exploratory knowledge representation and reasoning task for the algorithm designer, aiming at understanding the possibilities and limitations of efficiently computable modes of explanation. Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks. The latter case illustrates the possibility of having a representation formalism available to the user for specifying the type of explanation requested, for example, a chemical query language for classifying molecules. The approach is motivated by a theory of explanation in the philosophy of science, and it is related to current questions in the philosophy of science on the role of machine learning.
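The abstract gives no implementation details, but the graph-neural-network example suggests one way the idea could look in code: the user supplies a small specification language (here, toy "contains substructure" queries over molecular graphs), and the explainer returns the specification that best agrees with a black-box classifier on a sample. The sketch below is purely illustrative; the data structures, the query language, and the stand-in classifier are assumptions of this sketch, not the authors' method.

```python
# Hypothetical sketch (not the paper's implementation): a specification-guided
# explainer. The user supplies a specification language as a set of candidate
# predicates over toy molecular graphs; the explainer returns the candidate
# whose truth value best matches the black-box model's predictions.

from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, List, Tuple


@dataclass(frozen=True)
class Molecule:
    """Toy molecular graph: atoms labelled by element, bonds as index pairs."""
    atoms: Tuple[str, ...]
    bonds: FrozenSet[Tuple[int, int]]


def has_bond(mol: Molecule, a: str, b: str) -> bool:
    """Specification primitive: does the molecule contain an a-b bond?"""
    return any({mol.atoms[i], mol.atoms[j]} == {a, b} for i, j in mol.bonds)


# The specification language offered to the user: named substructure queries.
SPEC_LANGUAGE: Dict[str, Callable[[Molecule], bool]] = {
    "contains C-O bond": lambda m: has_bond(m, "C", "O"),
    "contains C-N bond": lambda m: has_bond(m, "C", "N"),
    "contains O-H bond": lambda m: has_bond(m, "O", "H"),
}


def explain_by_specification(
    model: Callable[[Molecule], int],
    sample: List[Molecule],
    spec_language: Dict[str, Callable[[Molecule], bool]],
) -> Tuple[str, float]:
    """Return the specification whose truth value best agrees with the model."""
    best_name, best_agreement = "", -1.0
    for name, query in spec_language.items():
        agreement = sum(int(query(m)) == model(m) for m in sample) / len(sample)
        if agreement > best_agreement:
            best_name, best_agreement = name, agreement
    return best_name, best_agreement


if __name__ == "__main__":
    # Stand-in for a trained GNN classifier: labels a molecule 1 iff it has
    # a C-O bond, so the "right" explanation is recoverable in this toy case.
    model = lambda m: int(has_bond(m, "C", "O"))

    sample = [
        Molecule(("C", "O", "H"), frozenset({(0, 1), (1, 2)})),  # C-O, O-H
        Molecule(("C", "N", "H"), frozenset({(0, 1), (1, 2)})),  # C-N, N-H
        Molecule(("C", "C", "O"), frozenset({(0, 1), (1, 2)})),  # C-C, C-O
    ]

    name, agreement = explain_by_specification(model, sample, SPEC_LANGUAGE)
    print(f"best explanation: {name!r} (agreement {agreement:.2f})")
```

In this reading, "explanation from specification" means the explainer is constrained to answer in the user's chosen formalism (here, substructure queries) rather than in whatever form a generic XAI method happens to produce.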


