LEx: A Framework for Operationalising Layers of Machine Learning Explanations
Several social factors affect how people respond to the AI explanations used to justify AI decisions that affect them personally. In this position paper, we define the Layers of Explanation (LEx) framework, a lens through which we can assess the appropriateness of different types of explanations. The framework uses the sensitivity (emotional responsiveness) of features and the level of stakes (the consequences of the decision) in a domain to determine whether a given type of explanation is appropriate in a given context. We demonstrate how the framework can be used to assess the appropriateness of explanation types across different domains.
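
To illustrate the idea, here is a minimal sketch of how a sensitivity-by-stakes lookup could be operationalised. The specific levels, explanation types, and combination rule below are hypothetical illustrations, not the mapping defined in the paper.

# Hypothetical sketch of a LEx-style appropriateness check.
# The levels, explanation types, and the rule combining them are
# illustrative assumptions, not the paper's actual LEx mapping.
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

# Hypothetical explanation types, ordered from shallow to deep.
EXPLANATION_TYPES = [
    "no explanation",
    "feature attribution",
    "example-based",
    "counterfactual",
]

def appropriate_explanations(feature_sensitivity: Level, domain_stakes: Level) -> list[str]:
    """Return explanation types assumed appropriate for this context.

    Illustrative rule: the higher the stakes and the more sensitive the
    features, the deeper the layers of explanation that are warranted.
    """
    depth = max(feature_sensitivity, domain_stakes)
    if depth == Level.LOW:
        return EXPLANATION_TYPES[:1]
    return EXPLANATION_TYPES[1 : depth + 1]

# Example: a high-stakes decision (e.g. a loan refusal) with
# moderately sensitive features.
print(appropriate_explanations(Level.MEDIUM, Level.HIGH))

In this sketch the two dimensions are collapsed with a simple maximum; the paper's framework may combine sensitivity and stakes differently, so the rule should be read only as a placeholder for a context-specific mapping.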