Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions

01/29/2019
by Hao Wang, et al.

When the average performance of a prediction model varies significantly with respect to a sensitive attribute (e.g., race or gender), the performance disparity can be expressed in terms of the probability distributions of input and output variables for each sensitive group. In this paper, we exploit this fact to explain and repair the performance disparity of a fixed classification model over a population of interest. Given a black-box classifier that performs unevenly across sensitive groups, we aim to eliminate the performance gap by perturbing the distribution of input features for the disadvantaged group. We refer to the perturbed distribution as a counterfactual distribution, and characterize its properties for popular fairness criteria (e.g., predictive parity, equal FPR, equal opportunity). We then design a descent algorithm to efficiently learn a counterfactual distribution given the black-box classifier and samples drawn from the underlying population. We use the estimated counterfactual distribution to build a data preprocessor that reduces disparate impact without training a new model. We illustrate these use cases through experiments on real-world datasets, showing that we can repair different kinds of disparate impact without a large drop in accuracy.
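As a rough illustration of the descent procedure the abstract describes (not the paper's actual algorithm, which characterizes counterfactual distributions for specific fairness criteria and derives the gradient analytically), the Python sketch below learns a simple mean-shift perturbation of the disadvantaged group's features by finite-difference descent on a black-box score gap. The toy classifier, the data, and all names here are hypothetical stand-ins.

import numpy as np

def disparity(f, X0, X1):
    # Squared gap in mean predicted score between the two groups;
    # a simple stand-in for the fairness criteria treated in the paper.
    return (f(X0).mean() - f(X1).mean()) ** 2

def learn_shift(f, X0, X1, steps=200, lr=0.5, eps=1e-3):
    # Descend on a mean-shift vector `delta` applied to the
    # disadvantaged group's features. Because f is a black box, the
    # gradient is estimated by finite differences here (the paper
    # instead derives it in closed form).
    d = X0.shape[1]
    delta = np.zeros(d)
    for _ in range(steps):
        base = disparity(f, X0 + delta, X1)
        grad = np.zeros(d)
        for j in range(d):
            step = np.zeros(d)
            step[j] = eps
            grad[j] = (disparity(f, X0 + delta + step, X1) - base) / eps
        delta -= lr * grad
    return delta

# Toy setup: a fixed logistic scorer plays the black-box classifier.
rng = np.random.default_rng(0)
w = rng.normal(size=5)
f = lambda X: 1.0 / (1.0 + np.exp(-X @ w))

X0 = rng.normal(loc=-0.5, size=(500, 5))  # disadvantaged group
X1 = rng.normal(loc=0.5, size=(500, 5))   # advantaged group

delta = learn_shift(f, X0, X1)
print("gap before:", abs(f(X0).mean() - f(X1).mean()))
print("gap after :", abs(f(X0 + delta).mean() - f(X1).mean()))

Applying X0 + delta before scoring then plays the role of the data preprocessor: the disparity is reduced without retraining f.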


Related research

- Counterfactual Graphs for Explainable Classification of Brain Networks (06/16/2021)
  Training graph classifiers able to distinguish between healthy brains an...

- Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness Setting (02/16/2023)
  Current AI regulations require discarding sensitive features (e.g., gend...

- Counterfactual Fairness in Text Classification through Robustness (09/27/2018)
  In this paper, we study counterfactual fairness in text classification, ...

- Black-Box Audits for Group Distribution Shifts (09/08/2022)
  When a model informs decisions about people, distribution shifts can cre...

- To Split or Not to Split: The Impact of Disparate Treatment in Classification (02/12/2020)
  Disparate treatment occurs when a machine learning model produces differ...

- On the Direction of Discrimination: An Information-Theoretic Analysis of Disparate Impact in Machine Learning (01/16/2018)
  In the context of machine learning, disparate impact refers to a form of...

- Rawlsian Fair Adaptation of Deep Learning Classifiers (05/31/2021)
  Group-fairness in classification aims for equality of a predictive utili...
