Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

by Michael Lohaus et al.

We show that deep neural networks that satisfy demographic parity do so through a form of race or gender awareness, and that the more we force a network to be fair, the more accurately we can recover race or gender from its internal state. Based on this observation, we propose a simple two-stage solution for enforcing fairness. First, we train a two-headed network to predict the protected attribute (such as race or gender) alongside the original task; second, we enforce demographic parity by taking a weighted sum of the heads. The result is a single-headed network with the same backbone architecture as the original network. Our approach performs nearly identically to existing regularization-based or preprocessing methods, but is more stable and more accurate when near-exact demographic parity is required. To cement the relationship between these two approaches, we show that an unfair and optimally accurate classifier can be recovered by taking a weighted sum of a fair classifier and a classifier predicting the protected attribute. We use this to argue that both the fairness approaches and our explicit formulation exhibit disparate treatment and that, consequently, they are likely to be unlawful in a wide range of scenarios under US law.
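The core construction in the abstract, combining a task head and a protected-attribute head into a single head over the same backbone, can be sketched in a few lines. The snippet below is a minimal illustration with linear heads and randomly initialized weights; the dimension `d`, the trade-off weight `lam`, and all variable names are hypothetical stand-ins, not the authors' implementation (in the paper, both heads are trained and `lam` is chosen to achieve demographic parity).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 16-dimensional backbone features for 100 samples.
d = 16
X = rng.normal(size=(100, d))

# Stage 1: two linear heads on the shared backbone
# (illustrative random weights; in practice both heads are trained).
w_task = rng.normal(size=d)  # head for the original prediction task
w_attr = rng.normal(size=d)  # head predicting the protected attribute

# Stage 2: enforce parity via a weighted sum of the heads.
# lam is a hypothetical trade-off weight, tuned in practice so the
# combined scores satisfy demographic parity on held-out data.
lam = 0.5
w_fair = w_task - lam * w_attr  # a single combined head

# The weighted sum of the two heads' outputs equals the output of one
# single-headed network with the same backbone architecture.
scores_two_heads = X @ w_task - lam * (X @ w_attr)
scores_one_head = X @ w_fair
assert np.allclose(scores_two_heads, scores_one_head)
```

Because the combination happens entirely in the final layer, the same algebra runs in reverse: adding `lam * w_attr` back to `w_fair` recovers `w_task`, which mirrors the abstract's point that an unfair classifier can be recovered from a fair one plus a protected-attribute predictor.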
