Contrastive Fairness in Machine Learning

05/17/2019
by Tapabrata Chakraborti, et al.

We present contrastive fairness, a new direction in causal inference applied to algorithmic fairness. Earlier methods addressed the "what if?" question (counterfactual fairness, NeurIPS'17). We establish the theoretical and mathematical implications of the contrastive question "why this and not that?" in the context of algorithmic fairness in machine learning. This is essential for defending the fairness of algorithmic decisions in tasks where one person or sub-group of people is chosen over another (job recruitment, university admission, company layoffs, etc.). It also helps institutions ensure and defend the fairness of their automated decision-making processes. A test case of employee job location allocation is provided as an illustrative example.
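For reference, the counterfactual ("what if?") criterion from the NeurIPS'17 work cited above (Kusner et al., 2017) is commonly stated as follows: a predictor \hat{Y} is counterfactually fair if, for every context X = x and protected attribute value A = a,

P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)

for all outcomes y and all attainable values a'. The contrastive question "why this and not that?" instead concerns a comparison between a chosen candidate and a rejected one; this abstract does not give the paper's formal definition, so the equation above is included only as the counterfactual baseline that the contrastive formulation builds on.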


