Gender Bias in Multilingual Neural Machine Translation: The Architecture Matters

12/24/2020
by   Marta R. Costa-Jussà, et al.

Multilingual Neural Machine Translation architectures differ mainly in how many modules and parameters they share across languages. In this paper, we take an algorithmic perspective and explore whether the chosen architecture, when trained on the same data, influences gender bias. Experiments on four language pairs show that the Language-Specific encoder-decoder architecture exhibits less bias than the Shared encoder-decoder architecture. Further interpretability analysis of the source embeddings and the attention shows that, in the Language-Specific case, the embeddings encode more gender information and the attention is more diverted. Both behaviors help mitigate gender bias.
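The core architectural contrast can be sketched with toy parameter inventories: a Shared system reuses one encoder and one decoder for every language, while a Language-Specific system instantiates a separate encoder and decoder per language. This is a minimal illustrative sketch; the module names are hypothetical and not taken from the paper's code.

```python
# Hypothetical sketch of the two parameter-sharing schemes for
# multilingual NMT. Each dictionary maps module names to placeholder
# parameter sets; the names are illustrative only.

def shared_architecture(languages):
    """One encoder and one decoder serve all language pairs."""
    return {"encoder": "shared_params", "decoder": "shared_params"}

def language_specific_architecture(languages):
    """Each language gets its own encoder and decoder; only the
    intermediate representation is common across languages."""
    modules = {}
    for lang in languages:
        modules[f"encoder_{lang}"] = f"params_{lang}"
        modules[f"decoder_{lang}"] = f"params_{lang}"
    return modules

langs = ["en", "de", "fr", "es"]
shared = shared_architecture(langs)
specific = language_specific_architecture(langs)

# The shared scheme has two modules in total, while the
# language-specific scheme grows with the number of languages.
assert len(shared) == 2
assert len(specific) == 2 * len(langs)
```

The trade-off the paper probes is that the language-specific scheme keeps per-language representations apart, which, by its analysis, lets embeddings retain more gender information.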
