Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI

05/03/2022
by Marco Valentino, et al.

A fundamental research goal for Explainable AI (XAI) is to build models capable of reasoning through the generation of natural language explanations. However, the methodologies used to design and evaluate explanation-based inference models are still poorly informed by theoretical accounts of the nature of explanation. In an attempt to provide an epistemologically grounded characterisation for XAI, this paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation. Specifically, the paper combines a detailed survey of the modern accounts of scientific explanation in Philosophy of Science with a systematic analysis of corpora of natural language explanations, clarifying the nature and function of explanatory arguments from both a top-down (categorical) and a bottom-up (corpus-based) perspective. Through a mixture of quantitative and qualitative methodologies, the study supports the following main conclusions: (1) Explanations cannot be entirely characterised in terms of inductive or deductive arguments, as their main function is to perform unification; (2) An explanation must cite causes and mechanisms that are responsible for the occurrence of the event to be explained; (3) While natural language explanations possess an intrinsic causal-mechanistic nature, they are not limited to causes and mechanisms, also accounting for pragmatic elements such as definitions, properties and taxonomic relations; (4) Patterns of unification naturally emerge in corpora of explanations even if not intentionally modelled; (5) Unification is realised through a process of abstraction, whose function is to provide the inference substrate for subsuming the event to be explained under recurring patterns and high-level regularities.
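Conclusion (4), that unification patterns emerge in explanation corpora even when not intentionally modelled, lends itself to a simple quantitative check. The sketch below is a minimal, hypothetical illustration, not the paper's actual methodology: it operationalises unification as the reuse of the same explanatory fact across explanations for distinct events, over a toy corpus invented for the example. A real analysis would run over a large explanation bank (e.g., a WorldTree-style resource).

```python
# Minimal, illustrative sketch (not the authors' code) of quantifying
# "unification" in a corpus of natural language explanations: a fact that
# recurs across explanations for distinct events acts as a unifying pattern.
from collections import Counter

# Hypothetical toy corpus: each event to be explained maps to the set of
# explanatory facts cited in its explanation.
corpus = {
    "an ice cube melts in the sun": {
        "melting means changing from solid to liquid by adding heat",
        "the sun is a source of heat",
    },
    "butter melts in a hot pan": {
        "melting means changing from solid to liquid by adding heat",
        "a hot pan is a source of heat",
    },
    "a puddle evaporates on a warm day": {
        "evaporation means changing from liquid to gas by adding heat",
        "the sun is a source of heat",
    },
}

# Count in how many distinct explanations each fact appears.
reuse = Counter(fact for facts in corpus.values() for fact in facts)

# Facts reused across explanations are candidate unification patterns:
# high-level regularities under which distinct events are subsumed.
for fact, count in reuse.most_common():
    if count > 1:
        print(f"{count}x  {fact}")
```

On this toy input the two abstract regularities ("melting means changing from solid to liquid by adding heat", "the sun is a source of heat") surface with reuse count 2, while event-specific facts do not recur, mirroring the paper's observation that unification is realised through abstraction.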


