Proportional Multicalibration

09/29/2022
by William La Cava et al.

Multicalibration is a desirable fairness criterion that constrains calibration error among flexibly defined groups in the data while maintaining overall calibration. However, when outcome probabilities are correlated with group membership, multicalibrated models can exhibit a higher percent calibration error among groups with lower base rates than among groups with higher base rates. As a result, it remains possible for a decision-maker to learn to trust or distrust model predictions for specific groups. To alleviate this, we propose proportional multicalibration (PMC), a criterion that constrains the percent calibration error among groups and within prediction bins. We prove that satisfying proportional multicalibration bounds a model's multicalibration as well as its differential calibration, a stronger fairness criterion inspired by the fairness notion of sufficiency. We provide an efficient algorithm for post-processing risk prediction models for proportional multicalibration and evaluate it empirically. We conduct simulation studies and investigate a real-world application of PMC post-processing to the prediction of emergency department patient admissions. We observe that proportional multicalibration is a promising criterion for controlling simultaneous measures of calibration fairness of a model over intersectional groups, with virtually no cost in terms of classification performance.
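To make the criterion concrete, the following is a minimal sketch of how the worst-case proportional calibration error described above could be estimated. Multicalibration bounds the absolute calibration error |E[y − p | group, bin]| in each (group, prediction bin) cell; proportional multicalibration instead bounds that error divided by the cell's base rate E[y | group, bin], so that low-base-rate groups are held to the same relative standard. The function name, the uniform binning scheme, and the `min_size` cutoff for sparse cells are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def pmc_violation(y_true, y_prob, groups, n_bins=10, min_size=20):
    """Estimate the worst-case proportional calibration error over
    (group, prediction bin) cells.

    For each cell, compute |E[y] - E[p]| / E[y]; return the maximum.
    Hypothetical helper for illustration, not the paper's algorithm.
    """
    # Assign each prediction to a uniform-width probability bin.
    bins = np.clip((y_prob * n_bins).astype(int), 0, n_bins - 1)
    worst = 0.0
    for g in np.unique(groups):
        for b in range(n_bins):
            mask = (groups == g) & (bins == b)
            if mask.sum() < min_size:
                continue  # skip sparse cells (illustrative cutoff)
            base_rate = y_true[mask].mean()
            if base_rate == 0:
                continue  # proportional error undefined at zero base rate
            cal_err = abs(base_rate - y_prob[mask].mean())
            worst = max(worst, cal_err / base_rate)
    return worst
```

For example, a predictor that outputs 0.8 for a subgroup whose true base rate is 0.5 has an absolute calibration error of 0.3 but a proportional error of 0.6; the same 0.3 error against a base rate of 0.9 would only be a proportional error of 1/3, which is what the percent-error formulation is designed to expose.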


