Mechanistic Interpretation of Machine Learning Inference: A Fuzzy Feature Importance Fusion Approach

by Divish Rengasamy et al.

With the widespread use of machine learning to support decision-making, it is increasingly important to verify and understand why a particular output is produced. Although post-training feature importance approaches assist this interpretation, there is an overall lack of consensus regarding how feature importance should be quantified, making explanations of model predictions unreliable. In addition, many of these explanations depend on the specific machine learning approach employed and on the subset of data used when calculating feature importance. A possible solution to improve the reliability of explanations is to combine results from multiple feature importance quantifiers across different machine learning approaches, coupled with re-sampling. Current state-of-the-art ensemble feature importance fusion uses crisp techniques to fuse results from different approaches. These approaches, however, incur significant loss of information, as they are not context-aware and reduce several quantifiers to a single crisp output. More importantly, their representation of 'importance' as coefficients is misleading and incomprehensible to end-users and decision-makers. Here we show how fuzzy data fusion methods can overcome some of the important limitations of crisp fusion methods.
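The core idea in the abstract, fusing feature importance scores from several quantifiers and then representing the fused result with fuzzy linguistic labels rather than a single crisp coefficient, can be illustrated with a minimal sketch. The quantifier names, membership-function shapes, and breakpoints below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of ensemble feature importance fusion.
# Assumed inputs: importance scores from three hypothetical quantifiers.
# Crisp fusion = mean of normalised scores; fuzzy step maps the fused
# score to linguistic labels via triangular membership functions.

def normalise(scores):
    """Scale a dict of feature -> importance so the values sum to 1."""
    total = sum(scores.values())
    return {f: v / total for f, v in scores.items()}

def crisp_fuse(per_method):
    """Baseline crisp fusion: per-feature mean of normalised scores."""
    methods = [normalise(s) for s in per_method.values()]
    feats = methods[0].keys()
    return {f: sum(m[f] for m in methods) / len(methods) for f in feats}

def triangular(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_labels(score):
    """Degree of membership of a fused score in each linguistic set
    (breakpoints are illustrative assumptions)."""
    return {
        "low":      triangular(score, -0.2, 0.0, 0.3),
        "moderate": triangular(score, 0.1, 0.35, 0.6),
        "high":     triangular(score, 0.4, 1.0, 1.6),
    }

# Hypothetical importance scores from three quantifiers:
per_method = {
    "permutation": {"age": 0.5, "dose": 0.3, "weight": 0.2},
    "shap":        {"age": 0.6, "dose": 0.2, "weight": 0.2},
    "gini":        {"age": 0.4, "dose": 0.4, "weight": 0.2},
}

fused = crisp_fuse(per_method)
for feat, score in fused.items():
    print(feat, round(score, 2), fuzzy_labels(score))
```

A feature can then be reported to an end-user as, say, "moderately important to a degree of 0.4", instead of an opaque coefficient, which is the interpretability gain the abstract claims for fuzzy over crisp fusion.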



Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems using Feature Importance Fusion

When machine learning supports decision-making in safety-critical system...

EFI: A Toolbox for Feature Importance Fusion and Interpretation in Python

This paper presents an open-source Python toolbox called Ensemble Featur...

Can Users Correctly Interpret Machine Learning Explanations and Simultaneously Identify Their Limitations?

Automated decision-making systems are becoming increasingly ubiquitous, ...

Partial order: Finding Consensus among Uncertain Feature Attributions

Post-hoc feature importance is progressively being employed to explain d...

Model interpretation using improved local regression with variable importance

A fundamental question on the use of ML models concerns the explanation ...

Comparing Feature Importance and Rule Extraction for Interpretability on Text Data

Complex machine learning algorithms are used more and more often in crit...

Predicting Illness for a Sustainable Dairy Agriculture: Predicting and Explaining the Onset of Mastitis in Dairy Cows

Mastitis is a billion dollar health problem for the modern dairy industr...
