Sequence Generation: From Both Sides to the Middle

06/23/2019
by Long Zhou, et al.

The encoder-decoder framework has achieved promising progress for many sequence generation tasks, such as neural machine translation and text summarization. Such a framework usually generates a sequence token by token from left to right, hence (1) this autoregressive decoding procedure is time-consuming when the output sentence becomes longer, and (2) it lacks the guidance of future context, which is crucial to avoid under-translation. To alleviate these issues, we propose a synchronous bidirectional sequence generation (SBSG) model which predicts its outputs from both sides to the middle simultaneously. In the SBSG model, we enable the left-to-right (L2R) and right-to-left (R2L) generation to help and interact with each other by leveraging an interactive bidirectional attention network. Experiments on neural machine translation (En-De, Ch-En, and En-Ro) and text summarization tasks show that the proposed model significantly speeds up decoding while improving the generation quality compared to the autoregressive Transformer.
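To make the both-sides-to-the-middle idea concrete, here is a minimal decoding sketch. It assumes a hypothetical model.step() interface that, at each step, scores the next token for each direction while attending to the tokens already produced by the other direction (standing in for the paper's interactive bidirectional attention). The names and interface are illustrative, not the authors' implementation.

    # Sketch of synchronous bidirectional decoding (greedy, batch size 1).
    # `model.step` is assumed to return one next token per direction,
    # conditioned on the source and on BOTH partial hypotheses.
    def sbsg_decode(model, src, bos, eos, max_len):
        l2r, r2l = [bos], [bos]   # prefixes grown from the left and right ends
        for _ in range(max_len // 2 + 1):
            # One token per direction per step, so roughly half the decoding steps
            # of a standard left-to-right autoregressive decoder.
            next_l, next_r = model.step(src, l2r, r2l)
            l2r.append(next_l)
            r2l.append(next_r)
            if next_l == eos or next_r == eos:
                break
        # Stitch the two halves: left half in order, right half reversed.
        return l2r[1:] + list(reversed(r2l[1:]))

Because each direction conditions on the other's partial output, the R2L half supplies the "future context" that a purely left-to-right decoder lacks, while the per-step parallelism accounts for the reported speedup.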


Related research

02/24/2019
Synchronous Bidirectional Inference for Neural Sequence Generation
In sequence to sequence generation tasks (e.g. machine translation and a...

08/25/2019
Efficient Bidirectional Neural Machine Translation
The encoder-decoder based neural machine translation usually generates a...

10/27/2020
Fast Interleaved Bidirectional Sequence Generation
Independence assumptions during sequence generation can speed up inferen...

09/01/2018
Beyond Error Propagation in Neural Machine Translation: Characteristics of Language Also Matter
Neural machine translation usually adopts autoregressive models and suff...

05/21/2023
A Framework for Bidirectional Decoding: Case Study in Morphological Inflection
Transformer-based encoder-decoder models that generate outputs in a left...

07/18/2019
Forward-Backward Decoding for Regularizing End-to-End TTS
Neural end-to-end TTS can generate very high-quality synthesized speech,...

11/05/2019
Improving Bidirectional Decoding with Dynamic Target Semantics in Neural Machine Translation
Generally, Neural Machine Translation models generate target words in a ...
