The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons

by Solon Barocas et al.

Counterfactual explanations are gaining prominence within technical, legal, and business circles as a way to explain the decisions of a machine learning model. These explanations share a trait with the long-established "principal reason" explanations required by U.S. credit laws: both explain a decision by highlighting a set of features deemed most relevant while withholding others. These "feature-highlighting explanations" have several desirable properties: they place no constraints on model complexity, do not require model disclosure, detail what would have needed to be different to achieve a different decision, and seem to automate compliance with the law. But they are far more complex and subjective than they appear. In this paper, we demonstrate that the utility of feature-highlighting explanations relies on a number of easily overlooked assumptions: that the recommended change in feature values clearly maps to real-world actions, that features can be made commensurate by looking only at the distribution of the training data, that features are relevant only to the decision at hand, and that the underlying model is stable over time, monotonic, and limited to binary outcomes. We then explore several consequences of acknowledging and attempting to address these assumptions, including a paradox in the way that feature-highlighting explanations aim to respect autonomy, the unchecked power that such explanations grant decision makers, and a tension between making these explanations useful and the need to keep the model hidden. While new research suggests several ways that feature-highlighting explanations can work around some of the problems we identify, the disconnect between features in the model and actions in the real world, and the subjective choices necessary to compensate for it, must be understood before these techniques can be usefully implemented.
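To make the idea concrete, here is a minimal sketch of a counterfactual, feature-highlighting explanation for a denied credit application. The linear model, feature names, weights, and step sizes are illustrative assumptions and not from the paper; a greedy search nudges features toward approval and then reports only the features that changed. Note that the search happily drives "debt" negative, a small instance of the feature-to-action disconnect the paper discusses.

```python
# Hypothetical linear credit-scoring model (weights and threshold are made up).
FEATURES = ["income", "debt", "years_employed"]
WEIGHTS = [0.5, -0.8, 0.3]
THRESHOLD = 2.0  # approve if score >= THRESHOLD


def score(x):
    """Linear score: dot product of weights and feature values."""
    return sum(w * v for w, v in zip(WEIGHTS, x))


def counterfactual(x, steps, max_iters=100):
    """Greedily nudge one feature at a time toward approval.

    Returns the counterfactual point and a dict of only the changed
    features with (old, new) values -- the "principal reasons".
    """
    cf = list(x)
    for _ in range(max_iters):
        if score(cf) >= THRESHOLD:
            break
        # Take the single-feature step that most improves the score.
        best = max(range(len(cf)), key=lambda i: WEIGHTS[i] * steps[i])
        cf[best] += steps[best]
    changed = {FEATURES[i]: (x[i], cf[i]) for i in range(len(x)) if cf[i] != x[i]}
    return cf, changed


applicant = [2.0, 1.5, 1.0]   # score 0.1 -> denied
steps = [0.5, -0.5, 1.0]      # raise income / pay down debt / add tenure
cf, changed = counterfactual(applicant, steps)
# `changed` highlights only the features the applicant "should" alter;
# here the search pays debt down past zero, which no real action can do.
```

The explanation delivered to the applicant would be derived from `changed` alone, leaving the model and the untouched features hidden, which is exactly the selective disclosure the abstract describes.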


Convex Density Constraints for Computing Plausible Counterfactual Explanations

The increasing deployment of machine learning as well as legal regulatio...

"Explain it in the Same Way!" – Model-Agnostic Group Fairness of Counterfactual Explanations

Counterfactual explanations are a popular type of explanation for making...

A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations

Counterfactual explanations are a prominent example of post-hoc interpre...

GeCo: Quality Counterfactual Explanations in Real Time

Machine learning is increasingly applied in high-stakes decision making ...

Feature Attributions and Counterfactual Explanations Can Be Manipulated

As machine learning models are increasingly used in critical decision-ma...

Decomposing Counterfactual Explanations for Consequential Decision Making

The goal of algorithmic recourse is to reverse unfavorable decisions (e....

A Causal Perspective on Meaningful and Robust Algorithmic Recourse

Algorithmic recourse explanations inform stakeholders on how to act to r...
