Are We On The Same Page? Hierarchical Explanation Generation for Planning Tasks in Human-Robot Teaming using Reinforcement Learning

12/22/2020
by   Mehrdad Zakershahrak, et al.

Providing explanations is considered an imperative ability for an AI agent in a human-robot teaming framework. The right explanation conveys the rationale behind an AI agent's decision making. However, to manage the cognitive demand placed on the human teammate in comprehending the explanations, prior work has focused on providing explanations in a specific order or on intertwining explanation generation with plan execution. These approaches do not, however, consider the level of detail shared throughout the explanations. In this work, we argue that explanations, especially complex ones, should be abstracted to align with the level of detail the teammate desires, so as to limit the recipient's cognitive load. The challenge is to learn a hierarchical model of explanations in which the level of detail the agent must produce becomes part of the learning objective. Moreover, the agent follows a high-level plan in the task domain, so that learned teammate preferences transfer to scenarios where the lower-level control policies differ while the high-level plan remains the same. Our results confirmed the hypothesis that understanding an explanation is a dynamic hierarchical process: the human preferences we observed correspond to creating and employing abstraction for knowledge assimilation, a mechanism hidden deeper in our cognitive process. We showed that hierarchical explanations achieved better task performance and behavior interpretability while reducing cognitive load. These results shed light on designing explainable agents that combine reinforcement learning and planning across various domains.
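To make the idea concrete, here is a minimal sketch, not the authors' implementation, of how an agent could learn which abstraction level to use at each step of a high-level plan via tabular Q-learning. The reward model (detail preferred early in the plan, abstraction preferred later) is entirely invented for illustration; the paper learns such preferences from human feedback.

```python
import random

# Hypothetical sketch: a tabular Q-learner that picks an abstraction
# level (0 = most detailed, 2 = most abstract) for each step of a
# fixed high-level plan. States are plan steps; actions are levels.
PLAN_STEPS = 4
LEVELS = 3
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def reward(step, level):
    # Invented teammate preference: early steps favor full detail,
    # later steps favor abstraction (stands in for learned feedback).
    preferred = 0 if step < PLAN_STEPS // 2 else 2
    return 1.0 - abs(level - preferred)

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * LEVELS for _ in range(PLAN_STEPS)]
    for _ in range(episodes):
        for step in range(PLAN_STEPS):
            # Epsilon-greedy action selection over abstraction levels.
            if rng.random() < EPS:
                a = rng.randrange(LEVELS)
            else:
                a = max(range(LEVELS), key=lambda i: q[step][i])
            nxt = max(q[step + 1]) if step + 1 < PLAN_STEPS else 0.0
            q[step][a] += ALPHA * (reward(step, a) + GAMMA * nxt - q[step][a])
    # Greedy abstraction level per plan step after training.
    return [max(range(LEVELS), key=lambda i: q[s][i]) for s in range(PLAN_STEPS)]

if __name__ == "__main__":
    print(train())  # detail early in the plan, abstraction later
```

Because the policy is indexed by high-level plan steps rather than low-level states, the learned preference over abstraction levels would carry over to a new scenario whose lower-level control policies differ, which is the transfer property the abstract describes.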

Related research:

- 03/15/2019 · Online Explanation Generation for Human-Robot Teaming — "As Artificial Intelligence (AI) becomes an integral part of our life, th..."
- 08/05/2022 · On Model Reconciliation: How to Reconcile When Robot Does not Know Human's Model? — "The Model Reconciliation Problem (MRP) was introduced to address issues ..."
- 08/01/2017 · Balancing Explicability and Explanation in Human-Aware Planning — "Human aware planning requires an agent to be aware of the intentions, ca..."
- 04/16/2020 · Order Matters: Generating Progressive Explanations for Planning Tasks in Human-Robot Teaming — "Prior work on generating explanations has been focused on providing the ..."
- 02/03/2018 · Plan Explanations as Model Reconciliation -- An Empirical Study — "Recent work in explanation generation for decision making agents has loo..."
- 04/25/2023 · A Closer Look at Reward Decomposition for High-Level Robotic Explanations — "Explaining the behavior of intelligent agents such as robots to humans i..."
- 11/16/2016 · Explicability as Minimizing Distance from Expected Behavior — "In order to have effective human AI collaboration, it is not simply enou..."
