Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes

11/11/2022
by   Tennison Liu, et al.

It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences. Fair ML has largely focused on protecting a single attribute in the simpler setting where both the attribute and the target outcome are binary. However, many practical real-world problems require simultaneously protecting multiple sensitive attributes, which are often not binary but continuous or categorical. To address this more challenging task, we introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces. This leads to two practical tools: first, the FairCOCCO Score, a normalised metric that quantifies fairness in settings with single or multiple sensitive attributes of arbitrary type; and second, a corresponding regularisation term that can be incorporated into arbitrary learning objectives to obtain fair predictors. These contributions address crucial gaps in the algorithmic fairness literature, and we empirically demonstrate consistent improvements over state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
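To give a concrete sense of how a kernel-based dependence measure can score fairness for attributes of arbitrary type, here is a minimal sketch of a normalised HSIC-style statistic between predictions and sensitive attributes. This is an illustrative stand-in in the same spirit as the FairCOCCO Score, not the paper's exact estimator; the function names, the RBF kernel choice, and the fixed bandwidth `sigma` are assumptions for the example.

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    x = np.atleast_2d(np.asarray(x, dtype=float))
    if x.shape[0] == 1:
        x = x.T  # treat a 1-D input as n samples of one feature
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (x @ x.T)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def _centered(K):
    # Double-centre a kernel matrix: HKH with H = I - (1/n) * ones.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def hsic(K, L):
    # Biased empirical HSIC estimate from two kernel matrices.
    n = K.shape[0]
    return np.trace(_centered(K) @ _centered(L)) / (n - 1) ** 2

def fairness_score(preds, sensitive, sigma=1.0):
    # Normalised dependence between predictions and (possibly
    # multivariate) sensitive attributes, scaled to [0, 1]:
    # 0 ~ independent (fair), 1 ~ fully dependent.
    K = rbf_kernel(preds, sigma)
    L = rbf_kernel(sensitive, sigma)
    denom = np.sqrt(hsic(K, K) * hsic(L, L))
    return float(hsic(K, L) / denom) if denom > 0 else 0.0
```

Because the measure operates on kernel matrices, multitype inputs fit naturally: continuous attributes use the RBF kernel directly, while categorical ones can be one-hot encoded first (or given a discrete kernel). A squared version of such a score can also serve as a differentiable regularisation term added to a learning objective, as the abstract describes.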


Related research

- 08/13/2022 · Locating disparities in machine learning: "Machine learning was repeatedly proven to provide predictions with dispa..."
- 05/24/2021 · MultiFair: Multi-Group Fairness in Machine Learning: "Algorithmic fairness is becoming increasingly important in data mining a..."
- 05/20/2020 · Fair Outlier Detection: "An outlier detection method may be considered fair over specified sensit..."
- 03/23/2021 · Promoting Fairness through Hyperparameter Optimization: "Considerable research effort has been guided towards algorithmic fairnes..."
- 02/16/2023 · Group Fairness with Uncertainty in Sensitive Attributes: "We consider learning a fair predictive model when sensitive attributes a..."
- 04/11/2018 · When optimizing nonlinear objectives is no harder than linear objectives: "Most systems and learning algorithms optimize average performance or ave..."
- 09/12/2023 · A Sequentially Fair Mechanism for Multiple Sensitive Attributes: "In the standard use case of Algorithmic Fairness, the goal is to elimina..."
