LUCID: Exposing Algorithmic Bias through Inverse Design

08/26/2022
by Carmen Mazijn, et al.

AI systems can create, propagate, support, and automate bias in decision-making processes. To mitigate biased decisions, we need both to understand the origin of the bias and to define what it means for an algorithm to make fair decisions. Most group fairness notions assess a model's equality of outcome by computing statistical metrics on the outputs. We argue that these output metrics encounter intrinsic obstacles and present a complementary approach that aligns with the increasing focus on equality of treatment. By Locating Unfairness through Canonical Inverse Design (LUCID), we generate a canonical set that shows the desired inputs for a model given a preferred output. The canonical set reveals the model's internal logic and exposes potential unethical biases by repeatedly interrogating the decision-making process. We evaluate LUCID on the UCI Adult and COMPAS data sets and find that some biases detected by a canonical set differ from those of output metrics. The results show that by shifting the focus towards equality of treatment and looking into the algorithm's internal workings, the canonical sets are a valuable addition to the toolbox of algorithmic fairness evaluation.
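The core idea of inverse design — optimizing inputs so a trained model produces the preferred output, then inspecting those inputs — can be illustrated with a minimal sketch. This is not the paper's implementation: the toy logistic model, its weights, and all hyperparameters below are illustrative assumptions, and the hypothetical `canonical_set` function stands in for the paper's canonical-set generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" logistic model standing in for a real classifier.
# Weights are illustrative only; in LUCID the model under audit is given.
W = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(X):
    # Probability of the preferred (positive) outcome.
    return sigmoid(X @ W + b)

def canonical_set(n_samples=64, steps=500, lr=0.5):
    """Gradient-ascent sketch of inverse design: start from random
    inputs and push each one toward the preferred output (score -> 1)."""
    X = rng.normal(size=(n_samples, 3))
    for _ in range(steps):
        p = model(X)
        # Gradient of the logistic score w.r.t. the inputs: p * (1 - p) * W.
        grad = (p * (1.0 - p))[:, None] * W[None, :]
        X += lr * grad
    return X

canon = canonical_set()
# The feature-wise mean of the canonical set shows which inputs the model
# "wants" to see for a positive decision; if a protected attribute were a
# feature, a skewed mean here would flag a bias in the model's internal logic.
print(canon.mean(axis=0))
```

Because the optimization only follows the model's own gradients, the resulting canonical set probes equality of treatment directly, rather than summarizing outputs over a fixed test population.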
