Minimum-Risk Recalibration of Classifiers

05/18/2023
by Zeyu Sun et al.

Recalibrating probabilistic classifiers is vital for enhancing the reliability and accuracy of predictive models. Despite the development of numerous recalibration algorithms, there is still a lack of a comprehensive theory that integrates calibration and sharpness, the latter being essential for maintaining predictive power. In this paper, we introduce the concept of minimum-risk recalibration within the framework of mean-squared-error (MSE) decomposition, offering a principled approach for evaluating and recalibrating probabilistic classifiers. Using this framework, we analyze the uniform-mass binning (UMB) recalibration method and establish a finite-sample risk upper bound of order Õ(B/n + 1/B^2), where B is the number of bins and n is the sample size. By balancing calibration and sharpness, we further determine that the optimal number of bins for UMB scales with n^{1/3}, resulting in a risk bound of approximately O(n^{-2/3}). Additionally, we tackle the challenge of label shift by proposing a two-stage approach that adjusts the recalibration function using limited labeled data from the target domain. Our results show that transferring a calibrated classifier requires significantly fewer target samples than recalibrating from scratch. We validate our theoretical findings through numerical simulations, which confirm the tightness of the proposed bounds, the optimal number of bins, and the effectiveness of label shift adaptation.
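To make the UMB procedure concrete, the following is a minimal NumPy sketch (not the authors' code) of uniform-mass binning recalibration: scores are split into equal-mass bins at empirical quantiles, each bin is mapped to its empirical positive rate, and the default bin count follows the n^{1/3} scaling highlighted above. Function and variable names are illustrative assumptions, and the empty-bin fallback of 0.5 is a choice made here for the sketch.

```python
import numpy as np

def umb_recalibrate(scores, labels, n_bins=None):
    """Fit a uniform-mass-binning (UMB) recalibration map.

    Bins are placed at empirical quantiles of the scores so each bin holds
    roughly the same number of calibration samples; every score in a bin is
    mapped to that bin's empirical positive rate. By default, n_bins is set
    to ~n^(1/3), mirroring the bin-count scaling discussed in the abstract.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    n = scores.size
    if n_bins is None:
        n_bins = max(1, int(round(n ** (1.0 / 3.0))))

    # Equal-mass bin edges at empirical quantiles; pad ends so any score falls in a bin.
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf

    # Empirical positive rate per bin defines the piecewise-constant recalibration map.
    bin_idx = np.clip(np.searchsorted(edges, scores, side="right") - 1, 0, n_bins - 1)
    bin_means = np.array([
        labels[bin_idx == b].mean() if np.any(bin_idx == b) else 0.5
        for b in range(n_bins)
    ])

    def recalibrate(new_scores):
        idx = np.searchsorted(edges, np.asarray(new_scores, dtype=float), side="right") - 1
        return bin_means[np.clip(idx, 0, n_bins - 1)]

    return recalibrate

# Usage sketch: fit on held-out calibration data, then apply to fresh predictions.
rng = np.random.default_rng(0)
raw = rng.uniform(size=1000)                  # raw classifier scores
y = rng.binomial(1, raw ** 2)                 # deliberately miscalibrated labels
print(umb_recalibrate(raw, y)(raw[:5]))       # recalibrated probabilities
```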


