On Learning Language-Invariant Representations for Universal Machine Translation

by Han Zhao et al.

The goal of universal machine translation is to learn to translate between any pair of languages, given a corpus of paired translated documents for only a small subset of all language pairs. Despite impressive empirical results and increasing interest in massively multilingual models, theoretical analysis of the translation errors made by such universal machine translation models is only nascent. In this paper, we formally prove certain impossibilities of this endeavour in general, as well as positive results in the presence of additional (but natural) structure in the data. For the former, we derive a lower bound on the translation error in the many-to-many translation setting, which shows that any algorithm aiming to learn shared sentence representations among multiple language pairs must make a large translation error on at least one of the translation tasks if no assumption is made about the structure of the languages. For the latter, we show that if the paired documents in the corpus follow a natural encoder-decoder generative process, we can expect a natural notion of “generalization”: a linear, rather than quadratic, number of language pairs suffices to learn a good representation. Our theory also explains what kinds of connection graphs between pairs of languages are better suited: graphs with longer paths result in worse sample complexity in terms of the total number of documents needed per language pair. We believe our theoretical insights and implications contribute to the future algorithmic design of universal machine translation.
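To make the connectivity argument in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's construction): languages are nodes and available parallel corpora are edges. If this graph is connected, every pair of languages is linked through shared representations, so a linear number of supervised pairs (e.g., a star around a pivot language) can cover all of the quadratically many translation directions; the length of the connecting path stands in for the worse sample complexity the paper associates with longer paths. The language codes and the star-shaped corpus below are hypothetical examples, not data from the paper.

```python
# Illustrative sketch only: languages as nodes, available parallel corpora as
# edges, and BFS hop counts as a proxy for the path length through which a
# translation direction is covered. All language codes and pairs are made up.
from collections import deque
from itertools import combinations

def shortest_path_lengths(languages, parallel_pairs):
    """Return BFS hop counts between every pair of languages in the corpus graph."""
    adj = {lang: set() for lang in languages}
    for a, b in parallel_pairs:
        adj[a].add(b)
        adj[b].add(a)
    lengths = {}
    for src in languages:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst, d in dist.items():
            lengths[(src, dst)] = d
    return lengths

languages = ["en", "fr", "de", "es", "zh"]
# A star-shaped corpus: only n - 1 = 4 supervised pairs, all through "en".
star_pairs = [("en", "fr"), ("en", "de"), ("en", "es"), ("en", "zh")]
lengths = shortest_path_lengths(languages, star_pairs)

n = len(languages)
print(f"supervised pairs: {len(star_pairs)} (linear) vs {n * (n - 1) // 2} (all pairs)")
for a, b in combinations(languages, 2):
    print(f"{a} -> {b}: path length {lengths[(a, b)]}")
```

In this toy star graph, any two non-pivot languages are two hops apart; a chain-shaped corpus would also be connected with n - 1 pairs but with paths up to n - 1 hops, which is the kind of long-path configuration the abstract flags as needing more documents per pair.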




