Practical Compositional Fairness: Understanding Fairness in Multi-Task ML Systems

11/05/2019
by Xuezhi Wang, et al.

Most of the fairness literature has focused on improving fairness with respect to a single model or a single objective. However, real-world machine learning systems are usually composed of many different components. Unfortunately, recent research has shown that even if each component is “fair,” the overall system can still be “unfair” <cit.>. In this paper, we focus on how well fairness composes over multiple components in real systems. We consider two recently proposed fairness metrics for rankings: exposure and pairwise ranking accuracy gap. We provide theory that establishes a set of conditions under which fairness of individual models does compose. We then present an analytical framework both for understanding whether a system's signals can achieve compositional fairness and for diagnosing which of these signals lowers the overall system's end-to-end fairness the most. Despite previously bleak theoretical results, on multiple datasets, including a large-scale real-world recommender system, we find that the overall system's end-to-end fairness is largely achievable by improving fairness in individual components.
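As a rough illustration of the two ranking-fairness metrics named in the abstract, the sketch below computes a per-group exposure gap and a per-group pairwise ranking accuracy gap on toy data. The function names, the 1/log2(1+rank) position-bias weight, and the per-group (rather than cross-group) pairwise comparison are illustrative assumptions and may differ from the paper's exact definitions.

```python
import numpy as np

def exposure(ranks):
    # A common position-bias weight for exposure: 1 / log2(1 + rank), rank starting at 1.
    return 1.0 / np.log2(1.0 + np.asarray(ranks, dtype=float))

def exposure_gap(ranks, groups):
    # Difference in mean exposure between items of group 0 and group 1.
    exp = exposure(ranks)
    groups = np.asarray(groups)
    return exp[groups == 0].mean() - exp[groups == 1].mean()

def pairwise_accuracy(scores, labels):
    # Fraction of (more relevant, less relevant) item pairs that the scores order correctly.
    correct, total = 0, 0
    for i in range(len(labels)):
        for j in range(len(labels)):
            if labels[i] > labels[j]:
                total += 1
                correct += int(scores[i] > scores[j])
    return correct / total if total else float("nan")

def pairwise_accuracy_gap(scores, labels, groups):
    # Gap in pairwise ranking accuracy between group 0 and group 1 (per-group variant).
    scores, labels, groups = map(np.asarray, (scores, labels, groups))
    acc0 = pairwise_accuracy(scores[groups == 0], labels[groups == 0])
    acc1 = pairwise_accuracy(scores[groups == 1], labels[groups == 1])
    return acc0 - acc1

# Toy example: six ranked items with binary relevance labels and group membership.
# A positive exposure_gap means group 0 receives more exposure on average.
print(exposure_gap(ranks=[1, 2, 3, 4, 5, 6], groups=[0, 1, 0, 1, 0, 1]))
print(pairwise_accuracy_gap(scores=[0.9, 0.8, 0.3, 0.7, 0.2, 0.1],
                            labels=[1, 0, 1, 0, 1, 0],
                            groups=[0, 0, 0, 1, 1, 1]))
```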

