Awareness in Practice: Tensions in Access to Sensitive Attribute Data for Antidiscrimination

12/12/2019
by Miranda Bogen, et al.

Organizations cannot address demographic disparities that they cannot see. Recent research on machine learning and fairness has emphasized that awareness of sensitive attributes, such as race and sex, is critical to the development of interventions. However, on the ground, the existence of these data cannot be taken for granted. This paper uses the domains of employment, credit, and healthcare in the United States to surface conditions that have shaped the availability of sensitive attribute data. For each domain, we describe how and when private companies collect or infer sensitive attribute data for antidiscrimination purposes. An inconsistent story emerges: Some companies are required by law to collect sensitive attribute data, while others are prohibited from doing so. Still others, in the absence of legal mandates, have determined that collection and imputation of these data are appropriate to address disparities. This story has important implications for fairness research and its future applications. If companies that mediate access to life opportunities are unable or hesitant to collect or infer sensitive attribute data, then proposed techniques to detect and mitigate bias in machine learning models might never be implemented outside the lab. We conclude that today's legal requirements and corporate practices, while highly inconsistent across domains, offer lessons for how to approach the collection and inference of sensitive data in appropriate circumstances. We urge stakeholders, including machine learning practitioners, to actively help chart a path forward that takes both policy goals and technical needs into account.
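To make concrete why awareness of sensitive attributes matters technically: even the simplest disparity measures are undefined without group labels. The sketch below (not from the paper; the data and function name are illustrative) computes a demographic parity gap, one common bias-detection metric, which requires a sensitive-attribute value for every record.

```python
# Minimal sketch, assuming synthetic example data: a demographic parity
# gap cannot be computed without per-record sensitive attribute labels.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 model outcomes
    groups:    list of sensitive-attribute values (e.g., "a", "b")
    """
    rates = {}
    for g in set(groups):
        outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Illustrative data: group "a" approved 3/4, group "b" approved 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.5
```

If the `groups` column is unavailable or legally barred from collection, this computation, and any mitigation built on it, cannot run, which is the deployment gap the paper highlights.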


Related research

- "What We Can't Measure, We Can't Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness (10/30/2020)
- Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks (02/04/2022)
- The Impact of Data Preparation on the Fairness of Software Systems (10/05/2019)
- Leveraging Algorithmic Fairness to Mitigate Blackbox Attribute Inference Attacks (11/18/2022)
- Fairness without the sensitive attribute via Causal Variational Autoencoder (09/10/2021)
- Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information (02/16/2021)
- Towards Compliant Data Management Systems for Healthcare ML (11/15/2020)
