Evaluating Metrics for Bias in Word Embeddings

by Sarah Schröder, et al.

Over recent years, word and sentence embeddings have established themselves as a standard text preprocessing step for all kinds of NLP tasks and have significantly improved performance. Unfortunately, it has also been shown that these embeddings inherit various kinds of biases from the training data and thereby pass on biases present in society to NLP solutions. Many papers have attempted to quantify bias in word or sentence embeddings in order to evaluate debiasing methods or compare different embedding models, usually with cosine-based metrics. However, some recent works have raised doubts about these metrics, showing that even when such metrics report low bias, other tests still reveal biases. In fact, a great variety of bias metrics and tests has been proposed in the literature without any consensus on an optimal solution. Yet we lack works that evaluate bias metrics on a theoretical level or elaborate the advantages and disadvantages of different bias metrics. In this work, we explore different cosine-based bias metrics. We formalize a bias definition based on the ideas from previous works and derive conditions for bias metrics. Furthermore, we thoroughly investigate the existing cosine-based metrics and their limitations to show why these metrics can fail to report biases in some cases. Finally, we propose a new metric, SAME, to address the shortcomings of existing metrics and mathematically prove that SAME behaves appropriately.
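To make the notion of a cosine-based bias metric concrete, the following is a minimal illustrative sketch of a typical association score: the mean cosine similarity of a word vector to one attribute set minus its mean similarity to another (as used, e.g., in WEAT-style tests). The function name, toy vectors, and attribute sets are hypothetical examples, not the paper's SAME metric.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_score(w, A, B):
    """Illustrative cosine-based bias score: mean cosine similarity of
    word vector w to attribute set A minus mean similarity to set B.
    A value near 0 indicates w is equally associated with both sets."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

# Toy 2-d vectors (purely illustrative; real embeddings are high-dimensional).
A = [np.array([1.0, 0.0])]        # e.g. one attribute group
B = [np.array([0.0, 1.0])]        # e.g. the contrasting attribute group
w_neutral = np.array([1.0, 1.0])  # equidistant from both attribute sets
w_skewed = np.array([1.0, 0.0])   # aligned with set A

print(association_score(w_neutral, A, B))  # ~0.0: balanced word
print(association_score(w_skewed, A, B))   # positive: associated with A
```

Metrics of this family differ mainly in how they aggregate such per-word scores; the paper's analysis concerns the conditions under which such aggregations can mask existing biases.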

