Disentangling and Operationalizing AI Fairness at LinkedIn

Operationalizing AI fairness at LinkedIn's scale is challenging not only because there are multiple mutually incompatible definitions of fairness but also because determining what is fair depends on the specifics and context of the product where AI is deployed. Moreover, AI practitioners need clarity on what fairness expectations need to be addressed at the AI level. In this paper, we present the evolving AI fairness framework used at LinkedIn to address these three challenges. The framework disentangles AI fairness by separating out equal treatment and equitable product expectations. Rather than imposing a trade-off between these two commonly opposing interpretations of fairness, the framework provides clear guidelines for operationalizing equal AI treatment complemented with a product equity strategy. This paper focuses on the equal AI treatment component of LinkedIn's AI fairness framework, shares the principles that support it, and illustrates their application through a case study. We hope this paper will encourage other big tech companies to join us in sharing their approach to operationalizing AI fairness at scale, so that together we can keep advancing this constantly evolving field.

