Explainable Planning

09/29/2017
by Maria Fox, et al.

As AI is increasingly being adopted into application solutions, the challenge of supporting interaction with humans is becoming more apparent. Partly this is to support integrated working styles, in which humans and intelligent systems cooperate in problem-solving, but also it is a necessary step in the process of building trust as humans migrate greater responsibility to such systems. The challenge is to find effective ways to communicate the foundations of AI-driven behaviour, when the algorithms that drive it are far from transparent to humans. In this paper we consider the opportunities that arise in AI planning, exploiting the model-based representations that form a familiar and common basis for communication with users, while acknowledging the gap between planning algorithms and human problem-solving.
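One way to see how a model-based planning representation can support explanation is with a toy example. The sketch below (a hypothetical illustration, not the paper's implementation) encodes two STRIPS-style actions and answers the question "why is this action in the plan?" by tracing causal links: the facts an action adds that later actions or the goal require.

```python
# Hypothetical sketch: a tiny STRIPS-style model and a causal-link
# answer to "why is this action in the plan?". All names are illustrative.

actions = {
    "pick_up(block_a)": {
        "pre": {"clear(block_a)", "hand_empty"},
        "add": {"holding(block_a)"},
        "del": {"hand_empty", "clear(block_a)"},
    },
    "stack(block_a, block_b)": {
        "pre": {"holding(block_a)", "clear(block_b)"},
        "add": {"on(block_a, block_b)", "hand_empty"},
        "del": {"holding(block_a)", "clear(block_b)"},
    },
}

plan = ["pick_up(block_a)", "stack(block_a, block_b)"]
goal = {"on(block_a, block_b)"}

def why(action, plan, actions, goal):
    """Explain an action by the facts it supplies to later actions or the goal."""
    i = plan.index(action)
    supplied = actions[action]["add"]
    reasons = []
    for later in plan[i + 1:]:
        needed = supplied & actions[later]["pre"]
        if needed:
            reasons.append(f"establishes {sorted(needed)} needed by {later}")
    achieved = supplied & goal
    if achieved:
        reasons.append(f"achieves goal fact(s) {sorted(achieved)}")
    return reasons

print(why("pick_up(block_a)", plan, actions, goal))
```

Because the explanation is phrased in terms of the model's own facts and actions, rather than the planner's search internals, it stays in a vocabulary the user already shares with the system.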

