Transfer learning of language-independent end-to-end ASR with language model fusion

11/06/2018
by   Hirofumi Inaguma, et al.

This work explores better adaptation methods for low-resource languages using an external language model (LM) within a transfer-learning framework. We first build a language-independent ASR system in a unified sequence-to-sequence (S2S) architecture with a vocabulary shared among all languages. During adaptation, we perform LM fusion transfer: an external LM is integrated into the decoder network of the attention-based S2S model throughout the adaptation stage, so that linguistic context of the target language is effectively incorporated. We also investigate various seed models for transfer learning. Experimental evaluations on the IARPA BABEL data set show that, when external text data is available, LM fusion transfer improves performance on all five target languages compared with simple transfer learning. Our final system drastically reduces the performance gap to the hybrid systems.
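The abstract describes integrating an external LM into the S2S decoder during adaptation. As a rough illustration of the general idea (not the paper's exact method), here is a minimal sketch of shallow fusion, the simplest LM fusion variant, at a single decoding step; the token distributions and the `lm_weight` value are toy values chosen for illustration.

```python
import math

def shallow_fusion_step(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """One decoding step of shallow fusion: interpolate the S2S decoder's
    per-token log-probabilities with an external LM's, controlled by a
    tunable interpolation weight."""
    return [a + lm_weight * l for a, l in zip(asr_log_probs, lm_log_probs)]

# Toy 4-token vocabulary: the external LM rescores the decoder's candidates.
asr = [math.log(p) for p in (0.5, 0.3, 0.1, 0.1)]
lm = [math.log(p) for p in (0.2, 0.6, 0.1, 0.1)]
fused = shallow_fusion_step(asr, lm)
best_token = max(range(len(fused)), key=lambda i: fused[i])
```

In practice such fused scores would feed a beam search, and the weight would be tuned on a development set; the paper's "LM fusion transfer" applies the integration during the whole adaptation stage rather than only at decoding time.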

Related research

06/09/2020
Improving Cross-Lingual Transfer Learning for End-to-End Speech Recognition with Speech Translation
Transfer learning from high-resource languages is known to be an efficie...

11/06/2018
Language model integration based on memory control for sequence to sequence speech recognition
In this paper, we explore several new schemes to train a seq2seq model t...

05/21/2020
Leveraging Text Data Using Hybrid Transformer-LSTM Based End-to-End ASR in Transfer Learning
In this work, we study leveraging extra text data to improve low-resourc...

06/14/2021
Overcoming Domain Mismatch in Low Resource Sequence-to-Sequence ASR Models using Hybrid Generated Pseudotranscripts
Sequence-to-sequence (seq2seq) models are competitive with hybrid models...

10/04/2018
Multilingual sequence-to-sequence speech recognition: architecture, transfer learning, and language modeling
Sequence-to-sequence (seq2seq) approach for low-resource ASR is a relati...

08/04/2018
Language Model Supervision for Handwriting Recognition Model Adaptation
Training state-of-the-art offline handwriting recognition (HWR) models r...

10/29/2020
Memory Attentive Fusion: External Language Model Integration for Transformer-based Sequence-to-Sequence Model
This paper presents a novel fusion method for integrating an external la...
