Model Learning with Personalized Interpretability Estimation (ML-PIE)

04/13/2021
by Marco Virgolin, et al.

High-stakes applications require AI-generated models to be interpretable. Current algorithms for synthesizing potentially interpretable models rely on objectives or regularization terms that represent interpretability only coarsely (e.g., model size) and are not designed for a specific user. Yet interpretability is intrinsically subjective. In this paper, we propose an approach for synthesizing models that are tailored to the user, by enabling the user to steer the model synthesis process according to his or her preferences. We use a bi-objective evolutionary algorithm to synthesize models with trade-offs between accuracy and a user-specific notion of interpretability. The latter is estimated by a neural network that is trained concurrently with the evolution using the user's feedback, which is collected via uncertainty-based active learning. To maximize usability, the user is only asked to indicate, given two models at a time, which one is less complex. In experiments on two real-world datasets involving 61 participants, we find that our approach can learn estimates of interpretability that differ markedly between users. Moreover, users tend to prefer the models found by the proposed approach over those found using non-personalized interpretability indices.
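To make the core mechanism concrete, below is a minimal sketch (not the authors' code) of how a user-specific interpretability estimator could be trained from pairwise feedback. It assumes each candidate model is described by a small feature vector (e.g., expression size and operator counts), uses a RankNet-style pairwise loss on the "which one is less complex?" answers, and selects the next query via a simple uncertainty heuristic: the pair whose predicted preference is closest to 50/50. All names, network sizes, and the exact query criterion are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn

class InterpretabilityEstimator(nn.Module):
    """Maps hand-crafted model features (e.g., expression size, operator
    counts) to a scalar complexity score: lower means more interpretable."""
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features) -> (batch,) complexity scores
        return self.net(x).squeeze(-1)

def pairwise_loss(score_a, score_b, a_is_less_complex):
    # RankNet / Bradley-Terry: P(user says "a is less complex")
    # = sigmoid(score_b - score_a), since a lower score means simpler.
    logits = score_b - score_a
    return nn.functional.binary_cross_entropy_with_logits(
        logits, a_is_less_complex)

def most_uncertain_pair(estimator, candidates):
    """Pick the pair of candidate models whose predicted preference is
    closest to 50/50, i.e., where asking the user is most informative."""
    with torch.no_grad():
        scores = estimator(candidates)  # (n,) scores, one per model
    best_pair, best_gap = None, float("inf")
    n = scores.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs((scores[i] - scores[j]).item())
            if gap < best_gap:
                best_pair, best_gap = (i, j), gap
    return best_pair

# One online update after the user answers a query; in the paper, this
# training runs concurrently with the evolutionary search.
def update(estimator, optimizer, feats_a, feats_b, a_is_less_complex):
    optimizer.zero_grad()
    loss = pairwise_loss(
        estimator(feats_a.unsqueeze(0)),
        estimator(feats_b.unsqueeze(0)),
        torch.tensor([float(a_is_less_complex)]),
    )
    loss.backward()
    optimizer.step()
    return loss.item()

The learned score can then serve as the second objective, alongside accuracy, in a multi-objective evolutionary algorithm such as NSGA-II, so that the search is steered toward models the specific user finds simple.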

Related research

04/23/2020 · Learning a Formula of Interpretability to Learn Interpretable Formulas
Many risk-sensitive applications require Machine Learning (ML) models to...

10/21/2019 · Making Bayesian Predictive Models Interpretable: A Decision Theoretic Approach
A salient approach to interpretable machine learning is to restrict mode...

04/25/2017 · A relevance-scalability-interpretability tradeoff with temporally evolving user personas
The current work characterizes the users of a VoD streaming space throug...

06/02/2020 · DeepCoDA: personalized interpretability for compositional health
Interpretability allows the domain-expert to directly evaluate the model...

10/26/2020 · Enforcing Interpretability and its Statistical Impacts: Trade-offs between Accuracy and Interpretability
To date, there has been no formal study of the statistical cost of inter...

11/18/2016 · Learning Interpretability for Visualizations using Adapted Cox Models through a User Experiment
In order to be useful, visualizations need to be interpretable. This pap...

05/24/2023 · Prompt Evolution for Generative AI: A Classifier-Guided Approach
Synthesis of digital artifacts conditioned on user prompts has become an...
