Can RNNs learn Recursive Nested Subject-Verb Agreements?

by Yair Lakretz, et al.

One of the fundamental principles of contemporary linguistics states that language processing requires the ability to extract recursively nested tree structures. However, it remains unclear whether and how this code could be implemented in neural circuits. Recent advances in Recurrent Neural Networks (RNNs), which achieve near-human performance in some language tasks, provide a compelling model to address such questions. Here, we present a new framework to study recursive processing in RNNs, using subject-verb agreement as a probe into the representations of the neural network. We trained six distinct types of RNNs on a simplified probabilistic context-free grammar designed to independently manipulate the length of a sentence and the depth of its syntactic tree. All RNNs generalized to subject-verb dependencies longer than those seen during training. However, none systematically generalized to deeper tree structures, not even those with a structural bias towards learning nested trees (i.e., stack-RNNs). In addition, our analyses revealed primacy and recency effects in the generalization patterns of LSTM-based models, showing that these models tend to perform well on the outer- and innermost parts of a center-embedded tree structure, but poorly on its middle levels. Finally, probing the internal states of the model during the processing of sentences with nested tree structures, we found a complex encoding of grammatical agreement information (e.g., grammatical number), in which the agreement information for multiple nouns was carried by a single unit. Taken together, these results indicate how neural networks may extract bounded nested tree structures, without learning a systematic recursive rule.
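The abstract's key stimulus manipulation — center-embedded sentences whose embedding depth can be varied independently of surface length — can be illustrated with a toy generator. This is a minimal sketch, not the authors' grammar: the vocabulary and the `nested_sentence` helper are invented here for illustration.

```python
import random

# Toy number-marked vocabulary (illustrative only, not the paper's lexicon).
NOUNS = {"sg": ["boy", "dog", "key"], "pl": ["boys", "dogs", "keys"]}
VERBS = {"sg": ["runs", "sees", "falls"], "pl": ["run", "see", "fall"]}

def nested_sentence(depth, rng=random):
    """Build a center-embedded clause of the given depth.

    Each level opens a subject ("the NOUN that ...") and the clause closes
    with verbs in reverse order, so each verb must agree in grammatical
    number with its matching subject: N1 N2 ... Nk Vk ... V2 V1.
    """
    numbers = [rng.choice(["sg", "pl"]) for _ in range(depth)]
    subjects = ["the " + rng.choice(NOUNS[n]) for n in numbers]
    verbs = [rng.choice(VERBS[n]) for n in numbers]
    # Subjects outermost-first on the left, verbs innermost-first on the right.
    left = " that ".join(subjects)
    right = " ".join(reversed(verbs))
    return f"{left} {right}"

# Depth 3 yields a doubly center-embedded sentence, e.g.
# "the keys that the boy that the dogs see holds fall" (shape, not exact output).
print(nested_sentence(3, rng=random.Random(0)))
```

Varying `depth` changes the number of nested dependencies, while length can be varied separately (e.g., by inserting prepositional phrases), which is the dissociation the paper's grammar is designed to test.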



