Error Parity Fairness: Testing for Group Fairness in Regression Tasks

by Furkan Gursoy, et al.

Artificial Intelligence (AI) applications increasingly inform decisions across many aspects of human lives. Society responds by imposing legal and social expectations for the accountability of such automated decision systems (ADSs). Fairness, a fundamental constituent of AI accountability, is concerned with the just treatment of individuals and sensitive groups (e.g., based on sex or race). While many studies focus on fair learning and fairness testing for classification tasks, the literature is rather limited on how to examine fairness in regression tasks. This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness based on a statistical hypothesis testing procedure. The error parity test checks whether prediction errors are distributed similarly across sensitive groups to determine whether an ADS is fair. It is followed by a suitable permutation test that compares groups on several statistics to explore disparities and identify impacted groups. The usefulness and applicability of the proposed methodology are demonstrated via a case study on county-level COVID-19 projections in the US, which revealed race-based differences in forecast errors. Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as part of larger accountability assessments and algorithm audits.
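The abstract describes a two-step procedure: test whether prediction errors are distributed similarly across sensitive groups, then use a permutation test on group statistics to locate disparities. The sketch below illustrates only the second step under simplifying assumptions: two groups, mean absolute error as the comparison statistic, and a generic label-shuffling permutation test. The function name and defaults are illustrative, not the paper's implementation.

```python
import numpy as np


def permutation_test(errors_a, errors_b,
                     stat=lambda e: np.mean(np.abs(e)),
                     n_perm=5000, seed=0):
    """Two-sided permutation test for a difference in an error statistic
    between two sensitive groups.

    errors_a, errors_b : arrays of prediction errors (y_true - y_pred)
                         for each group.
    stat               : statistic computed per group (default: mean
                         absolute error).
    Returns an approximate p-value for the null hypothesis that group
    membership is unrelated to the error statistic.
    """
    errors_a = np.asarray(errors_a)
    errors_b = np.asarray(errors_b)
    rng = np.random.default_rng(seed)

    # Observed absolute difference in the statistic between groups.
    observed = abs(stat(errors_a) - stat(errors_b))

    # Under the null, group labels are exchangeable: pool the errors
    # and repeatedly re-split them at random into groups of the
    # original sizes.
    pooled = np.concatenate([errors_a, errors_b])
    n_a = len(errors_a)

    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(stat(perm[:n_a]) - stat(perm[n_a:]))
        if diff >= observed:
            exceed += 1

    # Add-one correction keeps the p-value strictly positive.
    return (exceed + 1) / (n_perm + 1)
```

A small p-value suggests the chosen error statistic differs between groups more than chance alone would explain; in the methodology above, this step would be repeated for several statistics (e.g., bias, error variance) to characterize which groups are impacted and how.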




