How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN

11/18/2021
by R. Thomas McCoy, et al.

Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure (e.g., individual dependencies), model-generated text is substantially less novel than our baseline of human-generated text from each model's test set. For larger-scale structure (e.g., overall sentence structure), model-generated text is as novel or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis showing that GPT-2's novel text is usually well-formed morphologically and syntactically but has reasonably frequent semantic issues (e.g., being self-contradictory).
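To make the n-gram side of the analysis concrete, here is a minimal sketch of how novelty can be measured at the n-gram level: compute the fraction of n-grams in the generated text that never appear in the training corpus. The function name and token lists below are illustrative assumptions, not RAVEN's actual implementation.

```python
from typing import Iterable, List, Set, Tuple

def ngrams(tokens: List[str], n: int) -> Iterable[Tuple[str, ...]]:
    """Yield all contiguous n-grams from a token sequence."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

def novelty_rate(generated: List[str], training: List[str], n: int) -> float:
    """Fraction of generated n-grams that never occur in the training tokens.

    1.0 means every generated n-gram is novel; 0.0 means every n-gram
    duplicates a span seen in training. (Illustrative sketch only,
    not the RAVEN implementation.)
    """
    seen: Set[Tuple[str, ...]] = set(ngrams(training, n))
    generated_ngrams = list(ngrams(generated, n))
    if not generated_ngrams:
        return 0.0
    novel = sum(1 for g in generated_ngrams if g not in seen)
    return novel / len(generated_ngrams)

# Toy example with made-up token lists:
train_tokens = "the cat sat on the mat".split()
gen_tokens = "the cat sat on the rug".split()
print(novelty_rate(gen_tokens, train_tokens, n=3))  # 0.25: 1 of 4 trigrams is novel
```

The same measurement can be run on the human-written baseline text from each model's test set, so that model and human novelty rates are compared against the same training corpus.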


Related research

10/16/2022 · Model Criticism for Long-Form Text Generation
Language models have demonstrated the ability to generate highly fluent ...

05/19/2023 · Visualizing Linguistic Diversity of Text Datasets Synthesized by Large Language Models
Large language models (LLMs) can be used to generate smaller, more refin...

06/07/2023 · Long-form analogies generated by chatGPT lack human-like psycholinguistic properties
Psycholinguistic analyses provide a means of evaluating large language m...

11/14/2022 · Evade the Trap of Mediocrity: Promoting Diversity and Novelty in Text Generation via Concentrating Attention
Recently, powerful Transformer architectures have proven superior in gen...

10/08/2021 · Text analysis and deep learning: A network approach
Much information available to applied researchers is contained within wr...

05/24/2023 · KNN-LM Does Not Improve Open-ended Text Generation
In this paper, we study the generation quality of interpolation-based re...

08/18/2017 · Assessing the Stylistic Properties of Neurally Generated Text in Authorship Attribution
Recent applications of neural language models have led to an increased i...
