An Empirical Study on Fairness Improvement with Multiple Protected Attributes

07/25/2023
by Zhenpeng Chen, et al.

Existing research mostly improves the fairness of Machine Learning (ML) software with respect to a single protected attribute at a time, which is unrealistic given that many users have multiple protected attributes. This paper conducts an extensive study of fairness improvement regarding multiple protected attributes, covering 11 state-of-the-art fairness improvement methods. We analyze the effectiveness of these methods across different datasets, metrics, and ML models when considering multiple protected attributes. The results reveal that improving fairness for a single protected attribute can largely decrease fairness regarding the unconsidered protected attributes; this decrease is observed in up to 88.3% of scenarios. Surprisingly, we find little difference in accuracy loss when considering single versus multiple protected attributes, indicating that accuracy can be maintained in the multiple-attribute paradigm. However, the effect on precision and recall when handling multiple protected attributes is about 5 times and 8 times that of a single attribute, respectively. This has important implications for future fairness research: reporting only accuracy as the ML performance metric, as is currently common in the literature, is inadequate.
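The core finding above is that fairness must be measured separately for each protected attribute, because a model can look fair on one attribute while remaining unfair on another. A minimal sketch of this idea, using statistical parity difference (SPD) as the group fairness metric; the data and attribute names are illustrative, not taken from the paper:

```python
# Illustrative sketch: compute statistical parity difference (SPD)
# independently for each protected attribute. A model that appears fair
# on one attribute (SPD = 0) can be maximally unfair on another.

def spd(records, attr):
    """SPD: gap in positive-prediction rate between the best-off and
    worst-off group defined by the protected attribute `attr`."""
    groups = {}
    for r in records:
        groups.setdefault(r[attr], []).append(r["pred"])
    rates = [sum(v) / len(v) for v in groups.values()]
    return max(rates) - min(rates)

# Toy predictions: fair with respect to sex, unfair with respect to race.
records = [
    {"sex": "M", "race": "A", "pred": 1},
    {"sex": "M", "race": "B", "pred": 0},
    {"sex": "F", "race": "A", "pred": 1},
    {"sex": "F", "race": "B", "pred": 0},
]

print(spd(records, "sex"))   # -> 0.0 (both sexes have a 0.5 positive rate)
print(spd(records, "race"))  # -> 1.0 (race A always positive, race B never)
```

Checking only `spd(records, "sex")` here would report a perfectly fair model, which is exactly the single-attribute blind spot the study quantifies.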

Related research

07/28/2022 · Multiple Attribute Fairness: Application to Fraud Detection
We propose a fairness measure relaxing the equality conditions in the po...

05/19/2022 · What Is Fairness? Implications For FairML
A growing body of literature in fairness-aware ML (fairML) aspires to mi...

05/29/2023 · Generalized Disparate Impact for Configurable Fairness Solutions in ML
We make two contributions in the field of AI fairness over continuous pr...

06/30/2023 · Augmenting Holistic Review in University Admission using Natural Language Processing for Essays and Recommendation Letters
University admission at many highly selective institutions uses a holist...

08/26/2023 · Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models
Model fairness (a.k.a., bias) has become one of the most critical proble...

01/27/2022 · Fairness implications of encoding protected categorical attributes
Protected attributes are often presented as categorical features that ne...

04/03/2020 · FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret
Algorithmic decision making based on computer vision and machine learnin...
