A Unifying Theory of Distance from Calibration

11/30/2022
by   Jarosław Błasiok, et al.

We study the fundamental question of how to define and measure the distance from calibration for probabilistic predictors. While the notion of perfect calibration is well understood, there is no consensus on how to quantify the distance from perfect calibration. Numerous calibration measures have been proposed in the literature, but it is unclear how they compare to each other, and many popular measures such as Expected Calibration Error (ECE) fail to satisfy basic properties like continuity. We present a rigorous framework for analyzing calibration measures, inspired by the literature on property testing. We propose a ground-truth notion of distance from calibration: the ℓ_1 distance to the nearest perfectly calibrated predictor. We define a consistent calibration measure as one that is a polynomial-factor approximation to this distance. Applying our framework, we identify three calibration measures that are consistent and can be estimated efficiently: smooth calibration, interval calibration, and Laplace kernel calibration. The former two give quadratic approximations to the ground-truth distance, which we show is information-theoretically optimal. Our work thus establishes fundamental lower and upper bounds on measuring distance to calibration, and also provides theoretical justification for preferring certain metrics (such as Laplace kernel calibration) in practice.
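To make the continuity issue concrete, below is a minimal sketch of the standard plug-in binned ECE estimator that the abstract criticizes. The function name `binned_ece` and the equal-width binning scheme are illustrative choices, not taken from the paper; the point is that the bin assignment makes the estimate a discontinuous function of the predictions.

```python
import numpy as np

def binned_ece(preds, labels, n_bins=10):
    """Plug-in estimate of Expected Calibration Error with equal-width bins.

    ECE = sum over bins b of (|B_b| / n) * |mean(labels in B_b) - mean(preds in B_b)|.

    Because an infinitesimal change in a prediction can move it across a bin
    boundary, this estimator is discontinuous in the predictions -- one of the
    basic properties the abstract notes that popular measures fail to satisfy.
    """
    preds = np.asarray(preds, dtype=float)
    labels = np.asarray(labels, dtype=float)
    n = len(preds)
    # Assign each prediction to a bin [i/n_bins, (i+1)/n_bins); clip 1.0 into the last bin.
    bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            gap = abs(labels[mask].mean() - preds[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece
```

For example, predictions of 0.5 on a sample whose labels are half ones are reported as perfectly calibrated (ECE 0), while constant predictions of 0.9 on all-zero labels yield ECE 0.9.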


