Incremental Beam Manipulation for Natural Language Generation

by James Hargreaves et al.

The performance of natural language generation systems has improved substantially with modern neural networks. At test time they typically employ beam search to avoid locally optimal but globally suboptimal predictions. However, due to model errors, a larger beam size can lead to deteriorating performance according to the evaluation metric. For this reason, it is common to rerank the output of beam search, but this relies on beam search to produce a good set of hypotheses, which limits the potential gains. Other alternatives to beam search require changes to the training of the model, which restricts their applicability compared to beam search. This paper proposes incremental beam manipulation, i.e. reranking the hypotheses in the beam during decoding instead of only at the end. This way, hypotheses that are unlikely to lead to a good final output are discarded, and hypotheses that would otherwise have been ignored are considered in their place. Applying incremental beam manipulation leads to an improvement of 1.93 and 5.82 BLEU points over vanilla beam search for the test sets of the E2E and WebNLG challenges respectively. The proposed method also outperformed a strong reranker by 1.04 BLEU points on the E2E challenge, while being on par with it on the WebNLG dataset.
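The core idea of the abstract can be sketched as beam search with a reranking hook invoked at every decoding step rather than only on the final beam. The sketch below is a minimal, hypothetical illustration, not the paper's implementation: `step_log_probs`, `rerank`, and the toy model are assumed names, and plugging in a sort by model score recovers vanilla beam search.

```python
import math
from typing import Callable, List, Tuple

Hyp = Tuple[Tuple[int, ...], float]  # (token sequence, cumulative log-prob)

def incremental_beam_search(
    step_log_probs: Callable[[Tuple[int, ...]], List[Tuple[int, float]]],
    rerank: Callable[[List[Hyp]], List[Hyp]],
    beam_size: int,
    max_len: int,
    eos: int,
) -> Tuple[int, ...]:
    beams: List[Hyp] = [((), 0.0)]
    for _ in range(max_len):
        candidates: List[Hyp] = []
        for seq, score in beams:
            if seq and seq[-1] == eos:           # finished hypotheses carry over
                candidates.append((seq, score))
                continue
            for tok, lp in step_log_probs(seq):  # expand with next-token log-probs
                candidates.append((seq + (tok,), score + lp))
        # Vanilla beam search prunes by model score alone; incremental beam
        # manipulation instead lets an external reranker reorder the partial
        # hypotheses at every step before pruning to the beam size.
        beams = rerank(candidates)[:beam_size]
        if all(seq and seq[-1] == eos for seq, _ in beams):
            break
    return max(beams, key=lambda b: b[1])[0]

# Toy next-token model (hypothetical): prefer token 1, force EOS at length 2.
def toy_model(seq: Tuple[int, ...]) -> List[Tuple[int, float]]:
    if len(seq) >= 2:
        return [(0, 0.0)]  # EOS with probability 1
    return [(1, math.log(0.6)), (2, math.log(0.3)), (0, math.log(0.1))]

def by_model_score(cands: List[Hyp]) -> List[Hyp]:
    # Sorting by model score reproduces vanilla beam search; the paper's
    # method would substitute a learned reranker here.
    return sorted(cands, key=lambda c: c[1], reverse=True)

output = incremental_beam_search(toy_model, by_model_score,
                                 beam_size=2, max_len=5, eos=0)
print(output)  # → (1, 1, 0)
```

With `by_model_score` the behaviour is exactly vanilla beam search; the difference the paper proposes lies entirely in what the per-step `rerank` callable does.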




