Inducing Language-Agnostic Multilingual Representations
Multilingual representations have the potential to make cross-lingual systems available to the vast majority of languages in the world. However, they currently require large pretraining corpora or assume access to typologically similar languages. In this work, we address these obstacles by removing language identity signals from multilingual embeddings. We examine three approaches: 1) re-aligning the vector spaces of target languages (all together) to a pivot source language; 2) removing language-specific means and variances, which yields better discriminativeness of embeddings as a by-product; and 3) normalizing input texts by removing morphological contractions and reordering sentences, thus yielding language-agnostic representations. We evaluate on the tasks of XNLI and reference-free MT evaluation of varying difficulty across 19 selected languages. Our experiments demonstrate the language-agnostic behavior of our multilingual representations, which show the potential for zero-shot cross-lingual transfer to distant and low-resource languages, and decrease the performance gap by 8.9 points (M-BERT) and 18.2 points (XLM-R) on average across all tasks and languages. We make our code and models available.
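The second approach lends itself to a brief illustration. Below is a minimal sketch in Python/NumPy of removing per-language means and variances from sentence embeddings; the function name, the `embeddings_by_lang` input format, and the exact normalization details are our assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

def remove_language_signal(embeddings_by_lang):
    """Center and scale each language's sentence embeddings.

    Subtracting the per-language mean and dividing by the per-language
    standard deviation removes first- and second-moment language
    identity signals, making representations more comparable across
    languages.

    embeddings_by_lang: dict mapping a language code to an array of
    shape (num_sentences, dim). Hypothetical input format, assumed
    here for illustration.
    """
    normalized = {}
    for lang, X in embeddings_by_lang.items():
        mu = X.mean(axis=0, keepdims=True)             # language-specific mean
        sigma = X.std(axis=0, keepdims=True) + 1e-12   # language-specific std
        normalized[lang] = (X - mu) / sigma
    return normalized

# Toy usage: random vectors stand in for multilingual encoder outputs.
rng = np.random.default_rng(0)
emb = {
    "en": rng.normal(0.5, 1.0, size=(100, 768)),
    "sw": rng.normal(-0.3, 2.0, size=(100, 768)),
}
emb = remove_language_signal(emb)
print({k: (v.mean().round(6), v.std().round(6)) for k, v in emb.items()})
```

After this step, each language's embeddings share the same first and second moments, so a sentence's position in the space reflects its content rather than which language produced it, which is the property the zero-shot transfer results above rely on.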