Investigating Practices and Opportunities for Cross-functional Collaboration around AI Fairness in Industry Practice

by Wesley Hanwen Deng, et al.

An emerging body of research indicates that ineffective cross-functional collaboration – the interdisciplinary work done by industry practitioners across roles – represents a major barrier to addressing issues of fairness in AI design and development. In this research, we sought to better understand practitioners' current practices and tactics for enacting cross-functional collaboration around AI fairness, in order to identify opportunities to support more effective collaboration. We conducted a series of interviews and design workshops with 23 industry practitioners spanning various roles at 17 companies. We found that practitioners engaged in bridging work to overcome frictions in understanding, contextualization, and evaluation around AI fairness across roles. In addition, in organizational contexts lacking resources and incentives for fairness work, practitioners often piggybacked on existing requirements (e.g., for privacy assessments) and AI development norms (e.g., the use of quantitative evaluation metrics), while worrying that these tactics may be fundamentally compromised. Finally, we draw attention to the invisible labor that practitioners take on as part of this bridging and piggybacking work to enact interdisciplinary collaboration for fairness. We close by discussing opportunities for both FAccT researchers and AI practitioners to better support cross-functional collaboration for fairness in the design and development of AI systems.


