Massively Multilingual Adversarial Speech Recognition

04/03/2019
by Oliver Adams, et al.

We report on adaptation of multilingual end-to-end speech recognition models trained on as many as 100 languages. Our findings shed light on the relative importance of similarity between the target and pretraining languages along the dimensions of phonetics, phonology, language family, geographical location, and orthography. In this context, experiments demonstrate the effectiveness of two additional pretraining objectives in encouraging language-independent encoder representations: a context-independent phoneme objective paired with a language-adversarial classification objective.
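A common way to realize a language-adversarial classification objective is a gradient reversal layer: a language classifier is trained on top of the encoder states, while the reversed gradient discourages the encoder from retaining language-identifying information. The sketch below is illustrative only and assumes a PyTorch-style setup; the class names (GradReverse, AdversarialLanguageHead), the mean-pooling over time, and the loss weights alpha/beta are assumptions for exposition, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negated (and scaled) gradient on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the encoder.
        return -ctx.lamb * grad_output, None

class AdversarialLanguageHead(nn.Module):
    """Predicts the language ID from encoder states; the reversed gradient
    pushes the encoder toward language-independent representations."""
    def __init__(self, hidden_dim, num_languages, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.classifier = nn.Linear(hidden_dim, num_languages)

    def forward(self, encoder_states):
        # encoder_states: (batch, time, hidden_dim); pool over time, then classify.
        pooled = encoder_states.mean(dim=1)
        reversed_feats = GradReverse.apply(pooled, self.lamb)
        return self.classifier(reversed_feats)

# Hypothetical combined pretraining loss (weights alpha, beta are assumed):
#   loss = asr_loss + alpha * phoneme_ctc_loss + beta * ce(lang_logits, lang_ids)
# where phoneme_ctc_loss is a context-independent phoneme objective and
# lang_logits come from AdversarialLanguageHead above.
```

In this kind of setup, the language classifier itself learns to identify the pretraining language, but because its gradient is reversed before reaching the encoder, the encoder is pushed toward features that do not encode language identity, which is the intuition behind the adversarial objective described in the abstract.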

