On Hate Scaling Laws For Data-Swamps

by Abeba Birhane et al.

'Scale the model, scale the data, scale the GPU farms' is the reigning sentiment in the world of generative AI today. While model scaling has been extensively studied, data scaling and its downstream impacts remain underexplored. This is especially critical in the context of visio-linguistic datasets whose main source is the World Wide Web, condensed and packaged as the CommonCrawl dump. This large-scale data dump, which is known to have numerous drawbacks, is repeatedly mined and serves as the data motherlode for large generative models. In this paper, we: 1) investigate the effect of scaling datasets on hateful content through a comparative audit of LAION-400M and LAION-2B-en, containing 400 million and 2 billion samples respectively, and 2) evaluate the downstream impact of scale on visio-linguistic models trained on these dataset variants by measuring the racial bias of the resulting models, using the Chicago Face Dataset (CFD) as a probe. Our results show that 1) the presence of hateful content in the datasets, measured with a Hate Content Rate (HCR) metric over the inferences of the Pysentimiento hate-detection Natural Language Processing (NLP) model, increased by nearly 12% with scale, and 2) societal biases and negative stereotypes were also exacerbated with scale in the models we evaluated. As scale increased, the models' tendency to associate images of human faces with the 'human being' class over seven other offensive classes was halved. Furthermore, for the Black female category, the models' tendency to associate faces with the 'criminal' class doubled, and it quintupled for Black male faces. We present a qualitative and historical analysis of the model audit results, reflect on our findings and their implications for dataset curation practice, and close with a summary of our findings and potential future work in this area.
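The Hate Content Rate described above can be sketched as a simple proportion: the fraction of sampled alt-text captions that a hate-speech classifier flags, expressed as a percentage. The sketch below is a minimal illustration of that arithmetic, not the paper's pipeline: `flag_hateful` is a hypothetical stand-in for a real classifier such as Pysentimiento's hate-speech analyzer (`create_analyzer(task="hate_speech", lang="en")`), and its keyword list is a placeholder, not an actual hate lexicon.

```python
# Hedged sketch of a Hate Content Rate (HCR) computation.
# HCR = (number of captions flagged as hateful) / (total captions), in percent.

# Placeholder vocabulary; a real audit would use a trained hate-speech
# classifier (e.g. Pysentimiento), not keyword matching.
HYPOTHETICAL_FLAGGED_TERMS = {"hateful-term"}

def flag_hateful(caption: str) -> bool:
    """Stand-in classifier: True if the caption contains a flagged term."""
    tokens = caption.lower().split()
    return any(tok in HYPOTHETICAL_FLAGGED_TERMS for tok in tokens)

def hate_content_rate(captions: list[str]) -> float:
    """Return the percentage of captions flagged as hateful."""
    if not captions:
        return 0.0
    flagged = sum(flag_hateful(c) for c in captions)
    return 100.0 * flagged / len(captions)
```

Under this framing, the paper's comparative audit amounts to computing the same rate over caption samples from LAION-400M and LAION-2B-en and comparing the two percentages.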


