Counterfactual Instances Explain Little

09/20/2021
by Adam White et al.

In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach is to provide counterfactual instance explanations. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper draws on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that supports those counterfactual instances. We show that counterfactual instances by themselves explain little. We further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
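To make the notion of a counterfactual instance concrete, here is a minimal sketch (not the paper's method) of the common optimization-based search in the style of Wachter et al.: find an instance x' close to the original x whose prediction flips to the desired outcome, by minimizing lam * (f(x') - target)^2 + ||x' - x||^2. The toy logistic-regression model, its weights, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy logistic-regression "loan approval" model; w and b are assumed values.
w = np.array([1.5, -2.0])
b = -0.25

def predict_proba(x):
    """Probability of the favourable class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=1.0, lam=10.0, lr=0.05, steps=500):
    """Gradient descent on lam * (f(x') - target)^2 + ||x' - x||^2."""
    x_cf = x.copy()
    for _ in range(steps):
        p = predict_proba(x_cf)
        # Gradient of the squared prediction loss, using sigmoid' = p(1-p)
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w
        # Gradient of the squared distance to the original instance
        grad_dist = 2.0 * (x_cf - x)
        x_cf -= lr * (lam * grad_pred + grad_dist)
    return x_cf

x = np.array([-1.0, 0.5])       # rejected applicant: predict_proba(x) < 0.5
x_cf = counterfactual(x)
print(predict_proba(x), predict_proba(x_cf), x_cf)
```

The returned x_cf is a counterfactual instance in the paper's sense: a nearby input that would have received the desired decision. The paper's point is that such instances alone, without the causal equation that generated them, explain little about why the model decides as it does.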
