Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination

06/01/2019
by Nathan Kallus, et al.

The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policymaking, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental challenge to assessing disparate impacts in practice: protected class membership is often not observed in the data. This is particularly a problem in lending and healthcare. We consider the use of an auxiliary dataset, such as the US census, that includes class labels but not decisions or outcomes. We show that a variety of common disparity measures are generally unidentifiable aside from some unrealistic cases, providing a new perspective on the documented biases of popular proxy-based methods. We provide exact characterizations of the sharpest-possible partial identification set of disparities, either under no assumptions or when we incorporate mild smoothness constraints. We further provide optimization-based algorithms for computing and visualizing these sets, which enable reliable and robust assessments -- an important tool when disparity assessment can have far-reaching policy implications. We demonstrate this in two case studies with real data: mortgage lending and personalized medicine dosing.
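To make the partial-identification idea concrete, here is a minimal, hypothetical sketch (not the paper's actual algorithm). Suppose that within each proxy cell z (e.g., a geographic region from the auxiliary census data) we know the marginals p_z = P(Y=1 | Z=z) from the main dataset and q_z = P(A=1 | Z=z) from the auxiliary dataset, but never observe (Y, A) jointly. The joint probability P(Y=1, A=1 | Z=z) is then only bounded by the Fréchet-Hoeffding inequalities, which yields an interval of possible demographic disparities. All numbers and function names below are illustrative assumptions:

```python
def frechet_bounds(p, q):
    """Sharp Frechet-Hoeffding bounds on P(Y=1, A=1 | Z=z)
    given the marginals p = P(Y=1|Z=z) and q = P(A=1|Z=z)."""
    return max(0.0, p + q - 1.0), min(p, q)


def disparity_bounds(cells):
    """Bounds on the demographic disparity P(Y=1|A=1) - P(Y=1|A=0).

    cells: list of (w_z, p_z, q_z) tuples, where w_z = P(Z=z)
    (weights sum to 1), p_z = P(Y=1|Z=z), q_z = P(A=1|Z=z).
    """
    pY = sum(w * p for w, p, q in cells)   # P(Y=1)
    pA = sum(w * q for w, p, q in cells)   # P(A=1)
    # The joint P(Y=1, A=1) can be anything between the summed
    # cellwise Frechet bounds; disparity is monotone in it.
    J_lo = sum(w * frechet_bounds(p, q)[0] for w, p, q in cells)
    J_hi = sum(w * frechet_bounds(p, q)[1] for w, p, q in cells)

    def disparity(J):
        # P(Y=1|A=1) - P(Y=1|A=0) for a candidate joint J = P(Y=1, A=1)
        return J / pA - (pY - J) / (1.0 - pA)

    return disparity(J_lo), disparity(J_hi)


# Two hypothetical proxy cells: without observing A jointly with Y,
# the disparity is only identified up to a (possibly wide) interval.
lo, hi = disparity_bounds([(0.5, 0.6, 0.7), (0.5, 0.2, 0.3)])
print(lo, hi)  # -> -0.2 0.8
```

The width of the returned interval is what the paper's smoothness assumptions and optimization-based algorithms aim to characterize sharply; a point estimate from a naive proxy method would silently pick one value inside it.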

Related research

- Fairness Under Unawareness: Assessing Disparity When Protected Class Is Unobserved (11/27/2018): Assessing the fairness of a decision making system with respect to a pro...
- Proxy Fairness (06/28/2018): We consider the problem of improving fairness when one lacks access to a...
- Auditing for Diversity using Representative Examples (07/15/2021): Assessing the diversity of a dataset of information associated with peop...
- The Fairness of Credit Scoring Models (05/20/2022): In credit markets, screening algorithms aim to discriminate between good...
- Identifying biases in legal data: An algorithmic fairness perspective (09/21/2021): The need to address representation biases and sentencing disparities in ...
- Transparency Tools for Fairness in AI (Luskin) (07/09/2020): We propose new tools for policy-makers to use when assessing and correct...
- Does mitigating ML's disparate impact require disparate treatment? (11/19/2017): Following related work in law and policy, two notions of prejudice have ...
