Model-based metrics: Sample-efficient estimates of predictive model subpopulation performance

04/25/2021
by Andrew C. Miller, et al.

Machine learning models - now commonly developed to screen, diagnose, or predict health conditions - are evaluated with a variety of performance metrics. An important first step in assessing the practical utility of a model is to evaluate its average performance over an entire population of interest. In many settings, it is also critical that the model makes good predictions within predefined subpopulations. For instance, showing that a model is fair or equitable requires evaluating the model's performance in different demographic subgroups. However, subpopulation performance metrics are typically computed using only data from that subgroup, resulting in higher variance estimates for smaller groups. We devise a procedure to measure subpopulation performance that can be more sample-efficient than the typical subsample estimates. We propose using an evaluation model - a model that describes the conditional distribution of the predictive model score - to form model-based metric (MBM) estimates. Our procedure incorporates model checking and validation, and we propose a computationally efficient approximation of the traditional nonparametric bootstrap to form confidence intervals. We evaluate MBMs on two main tasks: a semi-synthetic setting where ground truth metrics are available and a real-world hospital readmission prediction task. We find that MBMs consistently produce more accurate and lower variance estimates of model performance for small subpopulations.
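To make the idea concrete, the sketch below illustrates the general shape of a model-based metric estimate; it is not the authors' exact procedure. It assumes a binary outcome, a predictive model that outputs a score in (0, 1), and a deliberately simple "evaluation model": a Gaussian model of the logit of the score conditioned on outcome and subgroup, with each subgroup's parameters shrunk toward the pooled population. A subgroup AUROC is then computed from draws of that evaluation model rather than from the subgroup's raw scores alone. The evaluation-model family, the shrinkage weight, and the toy data are all assumptions made for illustration.

```python
# Minimal sketch of a model-based metric (MBM) estimate. Not the paper's
# implementation: the Gaussian evaluation model, the shrinkage prior strength,
# and the simulated data are assumptions chosen to keep the example short.
import numpy as np
from scipy.special import logit, expit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def fit_evaluation_model(scores, y, groups, prior_strength=20.0):
    """Fit per-(group, outcome) Gaussian parameters for logit(score),
    shrinking each subgroup's estimates toward the pooled population."""
    z = logit(np.clip(scores, 1e-6, 1 - 1e-6))
    params = {}
    for g in np.unique(groups):
        for label in (0, 1):
            pooled = z[y == label]
            local = z[(y == label) & (groups == g)]
            n = len(local)
            w = n / (n + prior_strength)  # shrinkage weight (assumed form)
            mu = w * local.mean() + (1 - w) * pooled.mean() if n > 0 else pooled.mean()
            sd = w * local.std() + (1 - w) * pooled.std() if n > 1 else pooled.std()
            params[(g, label)] = (mu, sd)
    return params

def mbm_auroc(params, group, prevalence, n_draws=5000):
    """Model-based AUROC for one subgroup: draw synthetic scores from the
    fitted evaluation model and compute the metric on the draws."""
    n_pos = int(round(n_draws * prevalence))
    n_neg = n_draws - n_pos
    mu1, sd1 = params[(group, 1)]
    mu0, sd0 = params[(group, 0)]
    s_pos = expit(rng.normal(mu1, sd1, size=n_pos))
    s_neg = expit(rng.normal(mu0, sd0, size=n_neg))
    y_draw = np.r_[np.ones(n_pos), np.zeros(n_neg)]
    return roc_auc_score(y_draw, np.r_[s_pos, s_neg])

# Toy usage: compare the subsample AUROC with the model-based estimate
# in a small (5%) subgroup of simulated data.
n = 5000
groups = rng.choice(["A", "B"], size=n, p=[0.95, 0.05])
y = rng.binomial(1, 0.3, size=n)
scores = expit(rng.normal(1.2 * (y - 0.5), 1.0, size=n))  # simulated model scores

params = fit_evaluation_model(scores, y, groups)
mask = groups == "B"
print("subsample AUROC (group B):", roc_auc_score(y[mask], scores[mask]))
print("MBM AUROC (group B):      ", mbm_auroc(params, "B", prevalence=y[mask].mean()))
```

The sketch borrows strength from the full population through the pooled parameters, which is what drives the variance reduction for small subgroups; it omits the model checking and validation steps and the computationally efficient bootstrap approximation the paper proposes for confidence intervals.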
