Cases for Explainable Software Systems: Characteristics and Examples

08/12/2021
by Mersedeh Sadeghi, et al.

The need for systems to explain their behavior to users has become more evident with the rise of complex technologies such as machine learning and self-adaptation. In general, the need for an explanation arises when a system's behavior does not match the user's expectations. However, such a mismatch can have several causes, including errors, goal conflicts, or multi-agent interference. Given these varied situations, we need precise and agreed-upon descriptions of explanation needs, as well as benchmarks to align research on explainable systems. In this paper, we present a taxonomy that structures needs for explanation according to their different causes. We focus on explanations that improve the user's interaction with the system. For each leaf node in the taxonomy, we provide a scenario that describes a concrete situation in which a software system should provide an explanation. These scenarios, called explanation cases, illustrate the different demands for explanations. Our taxonomy can guide the elicitation of requirements for the explanation capabilities of interactive intelligent systems, and our explanation cases form the basis for a common benchmark. We are convinced that both the taxonomy and the explanation cases help the community align future research on explainable systems.
