Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking

06/20/2017
by Gabriele Tolomei, et al.

Machine-learned models are often described as "black boxes". In many real-world applications, however, models may have to sacrifice predictive power in favour of human interpretability. When this is the case, feature engineering becomes a crucial task, requiring significant and time-consuming human effort. Whilst some features are inherently static, representing properties that cannot be influenced (e.g., the age of an individual), others capture characteristics that could be adjusted (e.g., the daily amount of carbohydrates consumed). Nonetheless, once a model is learned from the data, each prediction it makes on a new instance is effectively irreversible, as every instance is treated as a static point in the chosen feature space. There are many circumstances, however, in which it is important to understand (i) why a model outputs a certain prediction on a given instance, (ii) which adjustable features of that instance should be modified, and (iii) how to alter the prediction when the modified instance is fed back into the model. In this paper, we present a technique that exploits the internals of a tree-based ensemble classifier to offer recommendations for transforming true negative instances into positively predicted ones. We demonstrate the validity of our approach on an online advertising application. First, we design a Random Forest classifier that effectively separates two classes of ads: low-quality (negative) and high-quality (positive) ones. Then, we introduce an algorithm that provides recommendations aimed at transforming a low-quality ad (negative instance) into a high-quality one (positive instance). Finally, we evaluate our approach on a subset of the active inventory of a large ad network, Yahoo Gemini.
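The core idea can be illustrated with a short sketch: for each tree in the ensemble, enumerate the root-to-leaf paths ending in a positive leaf, build an epsilon-satisfying variant of the negative instance for each such path, and keep the lowest-cost variant that the whole forest relabels as positive. The snippet below is a minimal, illustrative implementation of that idea, assuming scikit-learn's RandomForestClassifier; the epsilon value and the Euclidean cost function are stand-in choices for illustration, not necessarily those used in the paper.

    # Illustrative sketch of feature tweaking on a scikit-learn random forest.
    # Assumptions: binary classification, positive class = 1, epsilon and the
    # Euclidean cost are placeholder choices.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def positive_paths(tree, positive_class=1):
        """List the (feature, threshold, direction) conditions of every
        root-to-leaf path whose leaf predicts the positive class."""
        t = tree.tree_
        paths = []

        def walk(node, conds):
            if t.children_left[node] == -1:  # leaf node
                if np.argmax(t.value[node][0]) == positive_class:
                    paths.append(conds)
                return
            f, thr = t.feature[node], t.threshold[node]
            walk(t.children_left[node], conds + [(f, thr, "<=")])
            walk(t.children_right[node], conds + [(f, thr, ">")])

        walk(0, [])
        return paths

    def tweak(forest, x, epsilon=0.1, positive_class=1):
        """Return the lowest-cost perturbation of x that the whole forest
        labels positive, or None if no candidate path flips the prediction."""
        best, best_cost = None, np.inf
        for tree in forest.estimators_:
            for conds in positive_paths(tree, positive_class):
                x_new = x.copy()
                # Nudge each violated condition just past its threshold.
                for f, thr, d in conds:
                    if d == "<=" and x_new[f] > thr:
                        x_new[f] = thr - epsilon
                    elif d == ">" and x_new[f] <= thr:
                        x_new[f] = thr + epsilon
                if forest.predict(x_new.reshape(1, -1))[0] == positive_class:
                    cost = np.linalg.norm(x_new - x)  # illustrative cost
                    if cost < best_cost:
                        best, best_cost = x_new, cost
        return best

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)
    negatives = X[forest.predict(X) == 0]
    if len(negatives):
        print("tweaked instance:", tweak(forest, negatives[0]))

In this sketch the cost function ranks candidate tweaks; in practice one would restrict the nudges to the adjustable features only and choose a cost that reflects the effort of each change.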

