Reward Modeling for Mitigating Toxicity in Transformer-based Language Models

02/19/2022
by Farshid Faal, et al.

Transformer-based language models can generate fluent text and be efficiently adapted to a wide range of natural language generation tasks. However, language models pretrained on large unlabeled web-text corpora have been shown to degenerate into toxic content and socially biased behavior, which hinders their safe deployment. Various detoxification methods have been proposed to mitigate language model toxicity; however, these methods struggle to detoxify language models when conditioned on prompts that mention specific social identities related to gender, race, or religion. In this study, we propose Reinforce-Detoxify, a reinforcement learning-based method for mitigating toxicity in language models. We address the challenge of safety in language models and propose a new reward model that can detect toxic content and mitigate unintended bias toward social identities in toxicity prediction. Experiments demonstrate that Reinforce-Detoxify outperforms existing detoxification approaches on automatic evaluation metrics, indicating that our approach can detoxify language models while remaining less prone to unintended bias toward social identities in generated content.
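The core recipe the abstract describes is fine-tuning a language model with a policy-gradient objective, where a learned toxicity reward model scores the generated continuations. The sketch below illustrates that general idea, not the paper's actual implementation: GPT-2 as the policy, the off-the-shelf `unitary/toxic-bert` classifier as a stand-in for the paper's bias-aware reward model, a plain REINFORCE update with a KL penalty toward a frozen reference model in place of a full PPO-style objective, and an assumed learning rate and `kl_coef` chosen only for illustration.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# Policy to detoxify, plus a frozen copy used as a KL reference.
policy = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
reference = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token

# Stand-in reward model (assumption): any toxicity classifier fits here;
# the paper trains its own reward model that also counteracts identity bias.
rm = AutoModelForSequenceClassification.from_pretrained("unitary/toxic-bert").to(device)
rm_tok = AutoTokenizer.from_pretrained("unitary/toxic-bert")

optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-5)  # assumed lr
kl_coef = 0.2  # assumed weight keeping the policy close to the reference

def reward(texts):
    """Negative toxicity probability: less toxic text earns higher reward."""
    batch = rm_tok(texts, return_tensors="pt", padding=True,
                   truncation=True).to(device)
    with torch.no_grad():
        logits = rm(**batch).logits
    return -torch.sigmoid(logits[:, 0])  # index 0 = the "toxic" head

prompts = ["The people at the rally said"]  # toy prompt batch

for step in range(3):  # a few REINFORCE updates for illustration
    enc = tok(prompts, return_tensors="pt", padding=True).to(device)
    gen = policy.generate(**enc, do_sample=True, max_new_tokens=20,
                          pad_token_id=tok.eos_token_id)
    texts = tok.batch_decode(gen, skip_special_tokens=True)
    r = reward(texts).mean()

    # Mean log-prob of the sampled continuation (prompt tokens masked out).
    labels = gen.clone()
    labels[:, : enc["input_ids"].shape[1]] = -100
    logprob = -policy(gen, labels=labels).loss
    with torch.no_grad():
        ref_logprob = -reference(gen, labels=labels).loss

    # Shape the reward with a KL penalty, then take a REINFORCE step.
    shaped = r - kl_coef * (logprob - ref_logprob).detach()
    loss = -shaped * logprob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: reward={r.item():.3f} loss={loss.item():.3f}")
```

The KL term follows the standard reward-model fine-tuning recipe: without it, the policy can collapse onto degenerate low-toxicity text far from fluent language, so the penalty anchors generations near the pretrained reference while the reward pushes toxicity down.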
