Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation

11/10/2019
by Emily Dinan, et al.

Models readily learn biases present in their training data, and their predictions directly reflect those biases. We analyze the presence of gender bias in dialogue and examine its effect on generative chitchat dialogue models. Based on this analysis, we propose a combination of three techniques to mitigate bias: counterfactual data augmentation, targeted data collection, and conditional training. We use the multi-player text-based fantasy adventure dataset LIGHT as a testbed for our work. LIGHT exhibits a gender imbalance, with around 1.6 times as many male characters as female ones, likely because it was collected entirely from crowdworkers and reflects common biases in fantasy and medieval settings. We show that (i) our proposed techniques mitigate gender bias by balancing the genderedness of generated dialogue utterances; and (ii) they work particularly well in combination. Further, we show through various metrics—such as the quantity of gendered words, a dialogue safety classifier, and human evaluation—that our models generate less gendered, but still engaging, chitchat responses.
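Of the three techniques, counterfactual data augmentation is the most mechanical: gendered words in an utterance are swapped with their opposite-gender counterparts, and the swapped copy is added to the training data so the two genders appear in balanced contexts. The sketch below illustrates the idea; the word-pair list, function names, and capitalization handling are illustrative assumptions for this page, not the authors' implementation.

```python
# A minimal sketch of counterfactual data augmentation (CDA) for dialogue
# text. The word pairs and helpers below are illustrative assumptions,
# not the paper's exact implementation.
import re

# Small illustrative set of gendered word pairs; a real system would use
# a much larger curated list.
GENDER_PAIRS = {
    "he": "she", "him": "her", "his": "her",
    "man": "woman", "men": "women",
    "king": "queen", "kings": "queens",
    "lord": "lady", "father": "mother", "son": "daughter",
}
# Add the reverse direction. Note this resolves the "her" -> him/his
# ambiguity naively; a careful implementation would disambiguate with
# part-of-speech tags.
GENDER_PAIRS.update({v: k for k, v in list(GENDER_PAIRS.items())})

WORD_RE = re.compile(r"\b\w+\b")

def counterfactual(utterance: str) -> str:
    """Return a copy of the utterance with gendered words swapped."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = GENDER_PAIRS.get(word.lower())
        if repl is None:
            return word
        # Preserve simple sentence-initial capitalization.
        return repl.capitalize() if word[0].isupper() else repl
    return WORD_RE.sub(swap, utterance)

if __name__ == "__main__":
    original = "The king told his son that he would rule the land."
    # Training on both the original and the swapped utterance balances
    # the genderedness of the training data.
    print(counterfactual(original))
    # -> "The queen told her daughter that she would rule the land."
```

Conditional training, in the same spirit, conditions generation on a control signal (for example, a genderedness label prepended to the dialogue context) so that the genderedness of the model's output can be adjusted at inference time.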
