Coherence boosting: When your pretrained language model is not paying enough attention

03/17/2022
by malkin1729 et al.

Long-range semantic coherence remains a challenge in automatic language generation and understanding. We demonstrate that large language models have insufficiently learned the effect of distant words on next-token prediction. We present coherence boosting, an inference procedure that increases an LM's focus on a long context. We show the benefits of coherence boosting with pretrained models by distributional analyses of generated ordinary text and dialog responses. Coherence boosting applied to state-of-the-art models on various zero-shot NLP tasks also yields performance gains with no additional training.
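
The abstract does not spell out the boosting procedure itself. A common way to realize this kind of long-context emphasis is to contrast next-token logits computed from the full context with logits computed from only a short suffix of it, extrapolating away from the short-context distribution. The sketch below illustrates that idea with Hugging Face transformers; the model choice ("gpt2"), the boosting strength alpha, and the short-context length are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch of coherence boosting at inference time.
# Assumed formulation: extrapolate full-context log-probs away from
# short-context log-probs. alpha and short_len are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def boosted_next_token_logprobs(model, input_ids, alpha=0.5, short_len=10):
    """Next-token log-probs boosted toward long-context coherence.

    log p_boost ∝ (1 + alpha) * log p(x | full context)
                  - alpha * log p(x | last `short_len` tokens)
    """
    with torch.no_grad():
        full_logits = model(input_ids).logits[:, -1, :]
        short_logits = model(input_ids[:, -short_len:]).logits[:, -1, :]
    log_full = torch.log_softmax(full_logits, dim=-1)
    log_short = torch.log_softmax(short_logits, dim=-1)
    boosted = (1 + alpha) * log_full - alpha * log_short
    return torch.log_softmax(boosted, dim=-1)


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

context = ("The chef who ran the small bistro in Lyon said that "
           "her favourite dish to cook was")
input_ids = tokenizer(context, return_tensors="pt").input_ids
logprobs = boosted_next_token_logprobs(model, input_ids)
top = torch.topk(logprobs[0], k=5)
print(tokenizer.convert_ids_to_tokens(top.indices.tolist()))
```

Decoding can then proceed token by token from the boosted distribution, recomputing both the full-context and short-context logits at each step.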

Related research

- Boosting coherence of language models (10/15/2021): Naturality of long-term information structure – coherence – remains a ch...
- Improving Language Generation with Sentence Coherence Objective (09/07/2020): Conditional story generation and contextual text continuation have becom...
- Geographic Adaptation of Pretrained Language Models (03/16/2022): Geographic linguistic features are commonly used to improve the performa...
- AffectON: Incorporating Affect Into Dialog Generation (12/12/2020): Due to its expressivity, natural language is paramount for explicit and ...
- Towards Coherent and Consistent Use of Entities in Narrative Generation (02/03/2022): Large pre-trained language models (LMs) have demonstrated impressive cap...
- Coherent Wave Dynamics and Language Generation of a Generative Pre-trained Transformer (05/08/2023): Large Language Models (LLMs), such as the Generative Pretrained Transfor...
- Can Transformer Models Measure Coherence In Text? Re-Thinking the Shuffle Test (07/07/2021): The Shuffle Test is the most common task to evaluate whether NLP models ...
