M^3Fair: Mitigating Bias in Healthcare Data through Multi-Level and Multi-Sensitive-Attribute Reweighting Method

06/07/2023
by Yinghao Zhu, et al.

In the data-driven artificial intelligence paradigm, models rely heavily on large amounts of training data. However, factors such as sampling distribution imbalance can introduce bias and unfairness into healthcare data. Sensitive attributes, such as race, gender, age, and medical condition, are characteristics of individuals that are commonly associated with discrimination or bias. In healthcare AI, these attributes can play a significant role in determining the quality of care that individuals receive. For example, minority groups in the US often receive fewer procedures and poorer-quality medical care than white individuals. Detecting and mitigating bias in data is therefore crucial to advancing health equity. Bias mitigation methods fall into three categories: pre-processing, in-processing, and post-processing. Among them, Reweighting (RW) is a widely used pre-processing method that strikes a good balance between predictive performance and fairness. RW assigns a weight to the samples in each (group, label) combination, and these weights are then used in the loss function during training. However, RW can only account for a single sensitive attribute at a time and assumes that every sensitive attribute is equally important, which may lead to inaccuracies when addressing intersectional bias. To overcome these limitations, we propose M^3Fair, a multi-level, multi-sensitive-attribute reweighting method that extends RW to multiple sensitive attributes at multiple levels. Our experiments on real-world datasets show that the approach is effective, straightforward, and generalizable in addressing healthcare fairness issues.
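To make the reweighting step concrete, here is a minimal sketch of the classical single-attribute RW scheme that M^3Fair builds on: each (group, label) combination receives the weight w(g, y) = P(g) * P(y) / P(g, y), so that the sensitive attribute and the label look statistically independent after reweighting. The function name `reweigh` and the column names (`race`, `gender`, `outcome`) are hypothetical illustrations, not names from the paper, and the cross-product variant at the end is only a naive multi-attribute baseline, not the multi-level weighting M^3Fair proposes.

```python
import pandas as pd

def reweigh(df: pd.DataFrame, group: str, label: str) -> pd.Series:
    """Classical pre-processing Reweighing: give each sample the weight
    w(g, y) = P(g) * P(y) / P(g, y), so that group membership and label
    are statistically independent in the reweighted data."""
    n = len(df)
    p_g = df[group].map(df[group].value_counts(normalize=True))     # P(g) per row
    p_y = df[label].map(df[label].value_counts(normalize=True))     # P(y) per row
    p_gy = df.groupby([group, label])[label].transform("size") / n  # P(g, y) per row
    return (p_g * p_y / p_gy).rename("sample_weight")

# Hypothetical toy data: a binary outcome with two sensitive attributes.
df = pd.DataFrame({"race": ["A", "A", "B", "B", "B"],
                   "gender": ["F", "M", "F", "M", "F"],
                   "outcome": [1, 0, 0, 0, 1]})
weights = reweigh(df, "race", "outcome")
# The weights feed into the training loss, e.g. with scikit-learn:
#   model.fit(X, y, sample_weight=weights)

# A naive multi-attribute extension treats the cross-product of attributes
# as one group (every (race, gender) cell becomes its own group). This is
# a baseline only; M^3Fair instead weights at multiple levels so that
# attributes need not be treated as equally important.
df["race_x_gender"] = df["race"] + "_" + df["gender"]
weights_multi = reweigh(df, "race_x_gender", "outcome")
```

Note that the cross-product baseline fragments the data into many small cells and implicitly gives every attribute equal standing, which is exactly the limitation of plain RW that the multi-level formulation is meant to address.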
