Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms

We consider counterfactual explanations: the problem of minimally adjusting the features of a source input instance so that a given classifier assigns it a target class. This has become a topic of recent interest as a way to query a trained model and suggest possible actions to overturn its decision. The problem is formally equivalent to finding adversarial examples, which has also attracted significant attention recently. Most work on either counterfactual explanations or adversarial examples has focused on differentiable classifiers, such as neural nets. We focus on classification trees, both axis-aligned and oblique (having hyperplane splits). Although the counterfactual optimization problem is then nonconvex and nondifferentiable, we show that an exact solution can be computed very efficiently, even with high-dimensional feature vectors and with both continuous and categorical features, and we demonstrate this on several datasets and settings. The results are particularly relevant for applications in finance, medicine, or law, where interpretability and counterfactual explanations are especially important.
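To make the idea concrete, here is a minimal sketch (not the paper's implementation) of why the counterfactual problem is exactly solvable for trees: each leaf of the target class defines a region of feature space, the closest point in that region can be computed in closed form, and scanning the target-class leaves gives the global optimum. The sketch below assumes an axis-aligned tree with an L2 cost, so each leaf region is a box and the per-leaf projection is a coordinate-wise clip; the hand-built `leaves` table is hypothetical.

```python
# Sketch: exact L2 counterfactual for an axis-aligned tree.
# Hypothetical hand-built tree; each leaf is
# (predicted_class, lower_bounds, upper_bounds), the box of
# feature space the tree routes to that leaf.
import math

leaves = [
    (0, [-math.inf, -math.inf], [0.5, math.inf]),   # x1 <= 0.5          -> class 0
    (1, [0.5, -math.inf], [math.inf, 2.0]),         # x1 > 0.5, x2 <= 2  -> class 1
    (0, [0.5, 2.0], [math.inf, math.inf]),          # x1 > 0.5, x2 > 2   -> class 0
]

def counterfactual(x, target, leaves, eps=1e-9):
    """Closest point (L2) classified as `target`: project x onto each
    target-class leaf box by clipping, keep the best projection."""
    best_z, best_d = None, math.inf
    for label, lo, hi in leaves:
        if label != target:
            continue
        # Nudge by eps so strict split inequalities still hold.
        z = [min(max(xi, l + eps), h - eps) for xi, l, h in zip(x, lo, hi)]
        d = math.dist(x, z)
        if d < best_d:
            best_z, best_d = z, d
    return best_z, best_d

z, d = counterfactual([0.0, 0.0], 1, leaves)  # must cross the x1 = 0.5 split
```

For oblique trees the leaf regions are intersections of half-spaces rather than boxes, so the clipping step becomes a small convex projection (a quadratic program) per leaf, but the overall enumerate-leaves-and-project structure, and hence the exactness of the solution, is unchanged.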


Related papers:

- Counterfactual Explanations and Adversarial Examples – Common Grounds, Essential Differences, and Potential Transfers
- Semantics and explanation: why counterfactual explanations produce adversarial examples in deep neural networks
- On the Connections between Counterfactual Explanations and Adversarial Examples
- Very fast, approximate counterfactual explanations for decision forests
- Inverse classification with logistic and softmax classifiers: efficient optimization
- Counterfactual Explanations for Neural Recommenders
- Anytime Generation of Counterfactual Explanations for Text Classification
