Is Your Classifier Actually Biased? Measuring Fairness under Uncertainty with Bernstein Bounds

04/26/2020
by   Kawin Ethayarajh, et al.

Most NLP datasets are not annotated with protected attributes such as gender, making it difficult to measure classification bias using standard measures of fairness (e.g., equal opportunity). However, manually annotating a large dataset with a protected attribute is slow and expensive. Instead of annotating all the examples, can we annotate a subset of them and use that sample to estimate the bias? While it is possible to do so, the smaller this annotated sample is, the less certain we are that the estimate is close to the true bias. In this work, we propose using Bernstein bounds to represent this uncertainty about the bias estimate as a confidence interval. We provide empirical evidence that a 95% confidence interval consistently bounds the true bias. In quantifying this uncertainty, our method, which we call Bernstein-bounded unfairness, helps prevent classifiers from being deemed biased or unbiased when there is insufficient evidence to make either claim. Our findings suggest that the datasets currently used to measure specific biases are too small to conclusively identify bias except in the most egregious cases. For example, consider a co-reference resolution system that is 5% more accurate on gender-stereotypical sentences – to claim it is biased with 95% confidence, we need a bias-specific dataset that is 3.8 times larger than WinoBias, the largest available.
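
To make the idea concrete, below is a minimal sketch of how a Bernstein-style bound can turn a bias estimate computed on a small annotated subsample into a confidence interval. It uses the empirical Bernstein inequality (Maurer & Pontil, 2009) rather than the paper's exact formulation, and the per-example "bias costs", their range, and the sample size are illustrative assumptions, not results from the paper.

```python
import numpy as np

def empirical_bernstein_half_width(costs, delta=0.05, value_range=1.0):
    """Half-width of a (1 - delta) confidence interval for the mean of
    bounded random variables, via the empirical Bernstein bound
    (Maurer & Pontil, 2009). `costs` are per-example values lying in an
    interval of width `value_range`."""
    n = len(costs)
    sample_var = np.var(costs, ddof=1)  # unbiased sample variance
    log_term = np.log(2.0 / delta)      # two-sided bound at level delta
    return (np.sqrt(2.0 * sample_var * log_term / n)
            + 7.0 * value_range * log_term / (3.0 * (n - 1)))

# Illustrative usage with synthetic numbers (hypothetical, not from the paper):
# signed per-example "bias costs" on an annotated subsample, each in [-1, 1].
rng = np.random.default_rng(0)
costs = rng.choice([-1.0, 0.0, 1.0], size=500, p=[0.20, 0.55, 0.25])

estimate = costs.mean()
half_width = empirical_bernstein_half_width(costs, delta=0.05, value_range=2.0)
print(f"estimated bias: {estimate:.3f} +/- {half_width:.3f} (95% CI)")
# If the interval [estimate - half_width, estimate + half_width] contains 0,
# there is insufficient evidence to call the classifier biased at this level.
```

Because the half-width shrinks only at roughly a 1/sqrt(n) rate, a small annotated sample yields a wide interval, which is why the abstract argues that existing bias-specific datasets such as WinoBias can be too small to support a confident claim of bias.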
