Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations

by Ethan Wilcox, et al.

Deep learning sequence models have led to a marked increase in performance for a range of Natural Language Processing tasks, but it remains an open question whether they are able to induce proper hierarchical generalizations for representing natural language from linear input alone. Work using artificial languages as training input has shown that LSTMs are capable of inducing the stack-like data structures required to represent context-free and certain mildly context-sensitive languages---formal language classes which correspond in theory to the hierarchical structures of natural language. Here we present a suite of experiments probing whether neural language models trained on linguistic data induce these stack-like data structures and deploy them while incrementally predicting words. We study two natural language phenomena: center embedding sentences and syntactic island constraints on the filler--gap dependency. In order to properly predict words in these structures, a model must be able to temporarily suppress certain expectations and then recover those expectations later, essentially pushing and popping these expectations on a stack. Our results provide evidence that models can successfully suppress and recover expectations in many cases, but do not fully recover their previous grammatical state.
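The stack-like bookkeeping described above can be made concrete with a minimal sketch (not the authors' code; the function and tags are illustrative assumptions): while incrementally reading a center-embedded sentence, an expectation for each subject's verb is pushed, and the expectations must be recovered (popped) in last-in-first-out order.

```python
# Minimal illustrative sketch of the push/pop expectation pattern
# described in the abstract. The tagging scheme ("SUBJ"/"VERB") and
# function name are hypothetical, chosen only for this example.

def track_expectations(tokens):
    """Push an expected verb for each subject; pop when a verb arrives."""
    stack = []   # pending verb expectations, innermost subject on top
    trace = []   # (subject, verb) pairings in the order they resolve
    for word, tag in tokens:
        if tag == "SUBJ":
            stack.append(word)        # suppress: defer this subject's verb
        elif tag == "VERB":
            subject = stack.pop()     # recover: innermost expectation first
            trace.append((subject, word))
    return trace, stack

# "The rat [the cat [the dog chased] bit] ran."
sentence = [
    ("rat", "SUBJ"), ("cat", "SUBJ"), ("dog", "SUBJ"),
    ("chased", "VERB"), ("bit", "VERB"), ("ran", "VERB"),
]
pairs, leftover = track_expectations(sentence)
```

A model that fully recovers its grammatical state would behave like the empty `leftover` stack here; the paper's finding is that neural language models approximate this behavior but do not recover their previous expectations completely.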


