Same Same, But Different: Conditional Multi-Task Learning for Demographic-Specific Toxicity Detection

by Soumyajit Gupta, et al.

Algorithmic bias often arises as a result of differential subgroup validity, in which predictive relationships vary across groups. For example, in toxic language detection, the language of comments targeting different demographic groups can vary markedly. In such settings, trained models can be dominated by the relationships that best fit the majority group, leading to disparate performance. We propose framing toxicity detection as multi-task learning (MTL), allowing a model to specialize on the relationships that are relevant to each demographic group while also leveraging properties shared across groups. In this framing, each task corresponds to identifying toxicity against a particular demographic group. However, traditional MTL requires labels for all tasks to be present for every data point. To address this, we propose Conditional MTL (CondMTL), wherein only training examples relevant to the given demographic group are considered by the loss function. This lets us learn group-specific representations in each branch that are not cross-contaminated by irrelevant labels. Results on synthetic and real data show that CondMTL improves predictive recall over various baselines in general, and for the minority demographic group in particular, while maintaining similar overall accuracy.
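The core conditional-masking idea from the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, data layout, and the choice of per-head binary cross-entropy are all assumptions. Each group-specific head `g` computes its loss only over examples whose targeted demographic group is `g`, so labels from other groups never contribute to that branch.

```python
import math

def condmtl_loss(logits, labels, group_ids, n_groups):
    """Sketch of a CondMTL-style loss (illustrative, not the paper's code).

    logits:    per-example list of logits, one per group-specific head
    labels:    per-example toxicity labels (0 or 1)
    group_ids: index of the demographic group each example targets
    Head g's binary cross-entropy is averaged only over examples with
    group_ids == g, so irrelevant examples are masked out of its loss.
    """
    eps = 1e-9  # numerical guard inside the logs
    total = 0.0
    for g in range(n_groups):
        idx = [i for i, gid in enumerate(group_ids) if gid == g]
        if not idx:
            continue  # no examples target this group in the batch
        bce = 0.0
        for i in idx:
            p = 1.0 / (1.0 + math.exp(-logits[i][g]))  # sigmoid of head g
            y = labels[i]
            bce += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
        total += bce / len(idx)
    return total / n_groups
```

Because each head only reads the logit column for its own group on its own examples, perturbing logits of examples that target a different group leaves that head's loss unchanged, which is the "no cross-contamination" property the abstract describes.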


