The USTC-NEL Speech Translation system at IWSLT 2018

12/06/2018
by Dan Liu et al.

This paper describes the USTC-NEL system for the speech translation task of the IWSLT 2018 Evaluation. The system is a conventional pipeline consisting of three modules: speech recognition, post-processing, and machine translation. For speech recognition we train a group of hybrid-HMM models, and for machine translation we train Transformer-based neural machine translation models that take speech-recognition-style text as input. Experiments on the IWSLT 2018 task show that our system achieves a 14.9 BLEU improvement over the baseline system from KIT.
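The cascade described above can be sketched as a composition of three stages. This is a minimal illustrative sketch, not the authors' implementation: all function names and the toy stand-in logic (a fixed ASR hypothesis, simple truecasing/punctuation restoration, a placeholder translator) are assumptions made for demonstration.

```python
# Hedged sketch of the three-module pipeline: ASR -> post-processing -> MT.
# Every function body here is a placeholder; in the actual system the first
# stage is a hybrid-HMM recognizer and the last a Transformer NMT model.

def recognize(audio_features):
    # Stand-in for the ASR module: real ASR output is typically a
    # lowercase, unpunctuated token stream, which we mimic here.
    return "this is a test sentence"

def post_process(asr_text):
    # Stand-in post-processing: restore casing and punctuation so the
    # text better matches the style the MT model was trained on.
    return asr_text.capitalize() + "."

def translate(source_text):
    # Stand-in for the Transformer-based NMT module.
    return f"<translation of: {source_text}>"

def speech_translate(audio_features):
    # The conventional cascade composes the three modules in order.
    return translate(post_process(recognize(audio_features)))
```

The key design point reflected here is that the MT model consumes ASR-style input, so the post-processing stage exists to bridge the stylistic mismatch between recognizer output and translation training text.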


