The Emerging Landscape of Explainable AI Planning and Decision Making

02/26/2020
by   Tathagata Chakraborti, et al.

In this paper, we provide a comprehensive outline of the different threads of work in Explainable AI Planning (XAIP) that have emerged as a focus area in the last couple of years, and contrast them with earlier efforts in the field in terms of techniques, target users, and delivery mechanisms. We hope that the survey will offer new researchers in automated planning guidance on the role of explanations in the effective design of human-in-the-loop systems, and give established researchers some perspective on the evolution of the exciting world of explainable planning.


