Scaling Up Deliberation for Multilingual ASR

10/11/2022
by   Ke Hu, et al.

Multilingual end-to-end automatic speech recognition models are attractive due to their simplicity in training and deployment. Recent work on large-scale training of such models has shown promising results compared to monolingual models. However, that work often focuses on the multilingual models themselves in a single-pass setup. In this work, we investigate second-pass deliberation for multilingual speech recognition. Our proposed deliberation is multilingual, i.e., the text encoder encodes hypothesis text from multiple languages, and the decoder attends to multilingual text and audio. We investigate scaling the deliberation text encoder and decoder, and compare scaling the deliberation decoder against scaling the first-pass cascaded encoder. We show that deliberation improves the average WER on 9 languages by 4% relative compared to the single-pass model. By increasing the size of the deliberation up to 1B parameters, the average WER improvement increases to 9%. Our deliberation rescorer is based on transformer layers and can be parallelized during rescoring.
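The core idea of deliberation, as the abstract describes it, is that a second-pass decoder attends jointly to the first-pass audio encodings and to an encoding of the first-pass hypothesis text. The following is a minimal NumPy sketch of that attention pattern, not the paper's actual implementation: all shapes, names (`deliberate`, `cross_attention`), and the use of a single unprojected attention over a concatenated memory are simplifying assumptions; the real model uses trained transformer layers with learned projections.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, memory):
    # Scaled dot-product attention: each decoder query attends over
    # all memory frames and returns a weighted sum of them.
    d = queries.shape[-1]
    scores = queries @ memory.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ memory

def deliberate(audio_enc, hyp_text_enc, decoder_queries):
    # Deliberation-style attention (toy version): the second-pass decoder
    # attends to BOTH the first-pass audio encodings and the encoded
    # first-pass hypothesis text, here concatenated into one memory.
    memory = np.concatenate([audio_enc, hyp_text_enc], axis=0)
    return cross_attention(decoder_queries, memory)

rng = np.random.default_rng(0)
d = 8                                     # toy model dimension
audio_enc = rng.normal(size=(20, d))      # stand-in for first-pass encoder output
hyp_text_enc = rng.normal(size=(5, d))    # stand-in for text-encoder output
queries = rng.normal(size=(4, d))         # stand-in for decoder states

out = deliberate(audio_enc, hyp_text_enc, queries)
print(out.shape)  # (4, 8): one context vector per decoder query
```

Because each decoder query's attention is independent given the memory, rescoring in this style is straightforward to parallelize across hypothesis tokens, which is the property the abstract highlights for the transformer-based rescorer.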
