Multicriteria interpretability driven Deep Learning

11/28/2021
by Marco Repetto et al.

Deep Learning methods are renowned for their performance, yet their lack of interpretability prevents their adoption in high-stakes contexts. Recent model-agnostic methods address this problem by providing post-hoc interpretability through reverse-engineering the model's inner workings. However, in many regulated fields interpretability should be kept in mind from the start, which means that post-hoc methods serve only as a sanity check after model training. Interpretability from the start, in an abstract setting, means posing a set of soft constraints on the model's behavior by injecting knowledge and suppressing possible biases. We propose a Multicriteria technique that allows controlling the feature effects on the model's outcome by injecting knowledge into the objective function. We then extend the technique with a non-linear knowledge function to account for more complex effects and for local lack of knowledge. The result is a Deep Learning model that embodies interpretability from the start and aligns with recent regulations. A practical empirical example based on credit risk suggests that our approach creates performant yet robust models capable of overcoming biases derived from data scarcity.
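The core idea, knowledge injection as an extra criterion in the objective function, can be sketched in a few lines. The example below is a minimal illustration, not the authors' model: it uses a plain logistic regression trained with a scalarized objective that combines the predictive loss with a soft penalty forcing a feature's effect (here, a hypothetical "income" feature in a toy credit-risk task) to be non-negative. All names and the choice of penalty are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy credit data: x0 = income (domain knowledge: positive effect on
# creditworthiness), x1 = uninformative noise feature.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(2)
b = 0.0
lam = 5.0   # weight of the knowledge criterion in the scalarized objective
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ w + b)
    # Criterion 1: gradient of the binary cross-entropy loss.
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    # Criterion 2: gradient of the soft constraint max(0, -w0),
    # which penalizes a negative income effect.
    grad_w[0] += lam * (-1.0 if w[0] < 0 else 0.0)
    w -= lr * grad_w
    b -= lr * grad_b

# After training, the injected knowledge holds: the income effect
# is non-negative regardless of noise in the data.
print(w[0] >= 0.0)  # True
```

Because the constraint is soft, the data can still dominate where it carries strong evidence; the penalty only steers the model where the knowledge and the (possibly scarce or biased) data disagree.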
