Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts

07/14/2023
by Shaina Raza, et al.

Discriminatory language and biases often appear as hate speech in conversations, typically harming targeted groups defined by attributes such as race, gender, and religion. To tackle this issue, we propose a two-step approach: first, a classifier detects hate speech; then, a debiasing component generates less biased or unbiased alternatives through prompts. We evaluated our approach on a benchmark dataset and observed a reduction in the negativity of hate speech comments. The proposed method contributes to ongoing efforts to reduce biases in online discourse and to promote a more inclusive and fair environment for communication.
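The two-step pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the keyword check stands in for a trained hate-speech classifier, and the function names (`detect_hate_speech`, `build_debias_prompt`, `debias`) are hypothetical. In a real system, the prompt would be sent to a generative language model to produce the rewritten comment.

```python
# Hypothetical sketch of the detect-then-debias pipeline.
# A toy lexicon stands in for a learned hate-speech classifier.
HATE_LEXICON = {"hate", "stupid"}


def detect_hate_speech(text: str) -> bool:
    """Step 1: flag a comment (keyword check as a classifier stand-in)."""
    return any(word in text.lower().split() for word in HATE_LEXICON)


def build_debias_prompt(text: str) -> str:
    """Step 2: build a prompt asking a language model for an unbiased rewrite."""
    return (
        "Rewrite the following comment so it makes the same point "
        f'without hateful or biased language:\n"{text}"'
    )


def debias(text: str) -> str:
    """Run the full pipeline: leave clean comments untouched, prompt otherwise."""
    if not detect_hate_speech(text):
        return text  # non-hateful comments pass through unchanged
    # In a real pipeline this prompt would be sent to a generative model;
    # here we return the prompt itself to show the control flow.
    return build_debias_prompt(text)
```

The key design point is the separation of concerns: the classifier only gates which comments are rewritten, so the (more expensive) generative debiasing step runs only on flagged inputs.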
