Syntactic Surprisal From Neural Models Predicts, But Underestimates, Human Processing Difficulty From Syntactic Ambiguities

10/21/2022
by Suhas Arehalli, et al.

Humans exhibit garden path effects: when reading sentences that are temporarily structurally ambiguous, they slow down when the structure is disambiguated in favor of the less preferred alternative. Surprisal theory (Hale, 2001; Levy, 2008), a prominent explanation of this finding, proposes that these slowdowns are due to the unpredictability of each of the words that occur in these sentences. Challenging this hypothesis, van Schijndel and Linzen (2021) find that estimates of the cost of word predictability derived from language models severely underestimate the magnitude of human garden path effects. In this work, we consider whether this underestimation is due to the fact that humans weight syntactic factors in their predictions more highly than language models do. We propose a method for estimating syntactic predictability from a language model, allowing us to weight the cost of lexical and syntactic predictability independently. We find that treating syntactic predictability independently of lexical predictability indeed results in larger estimates of garden path effects. At the same time, even when syntactic predictability is weighted independently, surprisal still greatly underestimates the magnitude of human garden path effects. Our results support the hypothesis that predictability is not the only factor responsible for the processing cost associated with garden path sentences.
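To make the quantities at issue concrete, the sketch below computes per-token surprisal, -log2 P(w_t | w_<t), from an off-the-shelf language model (GPT-2 via Hugging Face transformers, chosen here only as a convenient stand-in; it is not the model or procedure used in the paper). The syntactic_surprisal helper illustrates one plausible way to isolate syntactic predictability, by marginalizing the next-word distribution over candidates sharing the target word's part-of-speech tag; the vocab_pos lookup it assumes is hypothetical, and the paper's actual method may differ.

```python
# A minimal sketch, NOT the authors' implementation: per-token surprisal
# from GPT-2 (an assumed stand-in model), plus one plausible way to
# estimate syntactic predictability by marginalizing over a POS class.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(sentence):
    """Surprisal in bits, -log2 P(w_t | w_<t), for each token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                      # (1, seq_len, vocab_size)
    # Log-probability each prefix assigns to the token that actually follows it.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.numel()), targets]
    bits = nats / math.log(2)                           # nats -> bits
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()),
                    bits.tolist()))

def syntactic_surprisal(next_word_probs, vocab_pos, target_pos):
    """-log2 of the probability mass on words whose POS tag matches
    target_pos. `vocab_pos` (token id -> POS tag) is a hypothetical
    lookup the caller must supply; this shows the general
    marginalization idea, not the paper's exact procedure."""
    mass = sum(next_word_probs[i].item()
               for i, pos in vocab_pos.items() if pos == target_pos)
    return -math.log2(mass)

# A classic garden path sentence: the disambiguating word ("fell")
# should receive comparatively high surprisal.
for tok, s in token_surprisals("The horse raced past the barn fell."):
    print(f"{tok!r:>12}  {s:6.2f} bits")
```

Marginalizing over a part-of-speech class treats all words of the same category as interchangeable for syntactic purposes, which is what lets the lexical and syntactic components of predictability be weighted separately.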

Related research

06/06/2021
A Targeted Assessment of Incremental Processing in Neural Language Models and Humans
We present a targeted, scaled-up comparison of incremental processing in...

09/30/2021
Syntactic Persistence in Language Models: Priming as a Window into Abstract Language Representations
We investigate the extent to which modern, neural language models are su...

10/25/2022
Dual Mechanism Priming Effects in Hindi Word Order
Word order choices during sentence production can be primed by preceding...

06/09/2023
Language Models Can Learn Exceptions to Syntactic Rules
Artificial neural networks can generalize productively to novel contexts...

08/29/2018
A Neural Model of Adaptation in Reading
It has been argued that humans rapidly adapt their lexical and syntactic...

02/08/2022
Do Language Models Learn Position-Role Mappings?
How is knowledge of position-role mappings in natural language learned? ...
