LIBRE: Learning Interpretable Boolean Rule Ensembles

11/15/2019
by Graziano Mita, et al.

We present LIBRE, a novel method for learning an interpretable classifier that takes the form of a set of Boolean rules. LIBRE uses an ensemble of bottom-up weak learners, each operating on a random subset of features, which allows it to learn rules that generalize well to unseen data even in imbalanced settings. The weak learners are combined with a simple union, so that the final ensemble is also interpretable. Experimental results indicate that LIBRE efficiently strikes the right balance between prediction accuracy, which is competitive with black-box methods, and interpretability, which is often superior to that of alternative methods from the literature.
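The abstract does not give implementation details, but the high-level idea it describes (weak rule learners trained on random feature subsets, combined by a simple union into one readable rule set) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: binary features, single-condition toy rules, and hypothetical function names; it is not the paper's actual bottom-up learning procedure.

```python
import random

def fires(rule, x):
    """A rule is a conjunction of feature indices; with the assumed binary
    encoding, it fires when every listed feature equals 1."""
    return all(x[i] == 1 for i in rule)

def learn_weak_rules(X, y, feature_subset):
    """Toy stand-in for a bottom-up weak learner restricted to a random
    feature subset: keep single-feature rules that cover only positives."""
    rules = []
    for i in feature_subset:
        covered = [yi for xi, yi in zip(X, y) if xi[i] == 1]
        if covered and all(covered):
            rules.append(frozenset([i]))
    return rules

def libre_fit(X, y, n_learners=10, subset_size=3, seed=0):
    """Each weak learner sees a random subset of features; the final model
    is the plain union of all learned rules, so it stays interpretable."""
    rng = random.Random(seed)
    n_features = len(X[0])
    ensemble = set()
    for _ in range(n_learners):
        subset = rng.sample(range(n_features), min(subset_size, n_features))
        ensemble |= set(learn_weak_rules(X, y, subset))
    return ensemble

def libre_predict(ensemble, x):
    """Predict positive if any rule in the union fires (a DNF classifier)."""
    return int(any(fires(rule, x) for rule in ensemble))

# Toy usage on binary data where the label is 1 whenever feature 0 is set.
X = [[1, 0, 1], [1, 1, 0], [0, 1, 1], [0, 0, 0]]
y = [1, 1, 0, 0]
model = libre_fit(X, y, n_learners=5, subset_size=2)
print(model, [libre_predict(model, x) for x in X])
```

Because each weak learner only ever sees a subset of features and the final combination is a union rather than a weighted vote, the resulting model remains a flat list of Boolean rules that a human can read directly, which is the interpretability property the abstract emphasizes.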


