M-Adapter: Modality Adaptation for End-to-End Speech-to-Text Translation

07/03/2022
by Jinming Zhao, et al.

End-to-end speech-to-text translation models are often initialized with a pre-trained speech encoder and a pre-trained text decoder. This leads to a significant gap between pre-training and fine-tuning, largely due to the modality differences between the speech outputs of the encoder and the text inputs expected by the decoder. In this work, we aim to bridge the modality gap between speech and text to improve translation quality. We propose M-Adapter, a novel Transformer-based module, to adapt speech representations to text. While shrinking the speech sequence, M-Adapter produces features suited to speech-to-text translation by modelling the global and local dependencies of a speech sequence. Our experimental results show that our model outperforms a strong baseline by up to 1 BLEU on the MuST-C En→De dataset. [Our code is available at https://github.com/mingzi151/w2v2-st.]
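To make the idea concrete, below is a minimal PyTorch sketch of a modality adapter that sits between a speech encoder and a text decoder: a strided convolution shrinks the time axis and captures local context, and a self-attention block models global dependencies. The class name, layer choices, and hyperparameters (dimension 768, 8 heads, stride 2) are illustrative assumptions for this sketch, not the paper's actual M-Adapter configuration.

```python
import torch
import torch.nn as nn


class ModalityAdapterSketch(nn.Module):
    """Illustrative adapter: shrinks a speech feature sequence with a strided
    convolution (local context) and refines it with self-attention (global
    context). Hyperparameters are placeholders, not the paper's settings."""

    def __init__(self, dim: int = 768, n_heads: int = 8, stride: int = 2):
        super().__init__()
        # Strided convolution downsamples the time axis and mixes local frames.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, stride=stride, padding=1)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) outputs of a speech encoder such as wav2vec 2.0.
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time/stride, dim)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)                  # global dependencies
        x = x + attn_out
        x = x + self.ffn(self.norm2(x))
        return x  # shortened sequence to feed the text decoder


if __name__ == "__main__":
    feats = torch.randn(2, 100, 768)                 # dummy speech features
    print(ModalityAdapterSketch()(feats).shape)      # torch.Size([2, 50, 768])
```

The key design point the abstract highlights is doing both jobs at once: the same module that compresses the long speech sequence toward text-like length also reshapes its representations, rather than relying on simple pooling before the decoder.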
