Language model acceptability judgements are not always robust to context

by Koustuv Sinha et al.

Targeted syntactic evaluations of language models ask whether models show stable preferences for syntactically acceptable content over minimal-pair unacceptable inputs. Most targeted syntactic evaluation datasets ask models to make these judgements with just a single context-free sentence as input. This does not match language models' training regime, in which input sentences are always highly contextualized by the surrounding corpus. This mismatch raises an important question: how robust are models' syntactic judgements in different contexts? In this paper, we investigate the stability of language models' performance on targeted syntactic evaluations as we vary properties of the input context: the length of the context, the types of syntactic phenomena it contains, and whether or not there are violations of grammaticality. We find that model judgements are generally robust when placed in randomly sampled linguistic contexts. However, they are substantially unstable for contexts containing syntactic structures matching those in the critical test content. Among all tested models (GPT-2 and five variants of OPT), we significantly improve models' judgements by providing contexts with matching syntactic structures, and conversely significantly worsen them using unacceptable contexts with matching but violated syntactic structures. This effect is amplified by the length of the context, except for unrelated inputs. We show that these changes in model performance are not explainable by simple features matching the context and the test inputs, such as lexical overlap and dependency overlap. This sensitivity to highly specific syntactic features of the context can only be explained by the models' implicit in-context learning abilities.
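The evaluation protocol described above can be sketched in a few lines. The sketch below is illustrative, not the authors' code: `score_sentence` is a hypothetical stand-in for any function returning a model's log-probability of a sentence (e.g. a GPT-2 or OPT forward pass), and the toy scorer exists only so the example runs without a model.

```python
def evaluate_minimal_pairs(pairs, score_sentence, context=""):
    """Fraction of minimal pairs where the model assigns a higher
    score to the acceptable sentence than to its unacceptable
    counterpart, with an optional context prepended to both."""
    correct = 0
    for good, bad in pairs:
        if score_sentence(context + good) > score_sentence(context + bad):
            correct += 1
    return correct / len(pairs)

# Toy scorer for illustration only: we pretend '*' marks an
# agreement violation, so sentences with more '*' score lower.
# A real run would substitute a language-model log-probability.
def toy_score(sentence):
    return -sentence.count("*")

pairs = [
    ("The keys are here.", "The keys *is here."),
    ("The dog runs.", "The dog *run."),
]
print(evaluate_minimal_pairs(pairs, toy_score))  # 1.0
```

Varying the `context` argument (length, matching vs. non-matching syntactic structures, acceptable vs. violated) is how the paper's manipulations slot into this loop.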


Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models

Targeted syntactic evaluations have demonstrated the ability of language...

Lost in the Middle: How Language Models Use Long Contexts

While recent language models have the ability to take long contexts as i...

Some of Them Can be Guessed! Exploring the Effect of Linguistic Context in Predicting Quantifiers

We study the role of linguistic context in predicting quantifiers ('few'...

Refining Targeted Syntactic Evaluation of Language Models

Targeted syntactic evaluation of subject-verb number agreement in Englis...

A Targeted Assessment of Incremental Processing in Neural Language Models and Humans

We present a targeted, scaled-up comparison of incremental processing in...

The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation

Temporary syntactic ambiguities arise when the beginning of a sentence i...

Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models

Humans can learn structural properties about a word from minimal experie...
