Intellige: A User-Facing Model Explainer for Narrative Explanations

05/27/2021
by Jilei Yang, et al.

Predictive machine learning models often lack interpretability, resulting in low trust from model end users despite their high predictive performance. While many model interpretation approaches return top important features to help interpret model predictions, these top features may not be well-organized or intuitive to end users, which limits model adoption rates. In this paper, we propose Intellige, a user-facing model explainer that creates user-digestible interpretations and insights reflecting the rationale behind model predictions. Intellige builds an end-to-end pipeline from machine learning platforms to end-user platforms, and provides users with an interface for implementing model interpretation approaches and for customizing narrative insights. Intellige consists of four components: Model Importer, Model Interpreter, Narrative Generator, and Narrative Exporter. We describe these components and then demonstrate the effectiveness of Intellige through use cases at LinkedIn. Quantitative performance analyses indicate that Intellige's narrative insights lead to lifts in the adoption rates of predictive model recommendations, as well as increases in downstream key metrics such as revenue, compared with previous approaches, while qualitative analyses indicate positive feedback from end users.
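The four components form a pipeline that turns a model prediction into a user-facing narrative. As a rough illustration only, the minimal Python sketch below composes such a pipeline under simplifying assumptions: a linear model whose feature attributions are weight-times-value, and a plain string template. All function, feature, and template names are illustrative assumptions, not the actual Intellige API.

    from dataclasses import dataclass

    @dataclass
    class Attribution:
        feature: str
        contribution: float

    def import_model(weights):
        """Model Importer: wrap a trained model (here, just linear weights)."""
        return dict(weights)

    def interpret(model, sample, top_k=2):
        """Model Interpreter: score each feature by weight * value and keep the top-k."""
        scores = [Attribution(f, model[f] * v) for f, v in sample.items() if f in model]
        return sorted(scores, key=lambda a: abs(a.contribution), reverse=True)[:top_k]

    def generate_narrative(entity, attributions):
        """Narrative Generator: fill a customizable template with the top features."""
        reasons = ", ".join(f"{a.feature} ({a.contribution:+.2f})" for a in attributions)
        return f"{entity} is recommended mainly because of: {reasons}."

    def export_narrative(narrative):
        """Narrative Exporter: deliver the insight to an end-user platform (stdout here)."""
        print(narrative)

    if __name__ == "__main__":
        model = import_model({"recent_engagement": 0.8, "profile_match": 0.5, "tenure": 0.1})
        sample = {"recent_engagement": 0.9, "profile_match": 0.4, "tenure": 5.0}
        export_narrative(generate_narrative("Account A", interpret(model, sample)))

Running the sketch prints a single narrative sentence naming the two features that contributed most to the hypothetical recommendation, which mirrors the paper's goal of turning raw feature attributions into digestible insights.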


