How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model

by Michael Hanna, et al.
University of Southern California
University of Amsterdam

Pre-trained language models can be surprisingly adept at tasks they were not explicitly trained on, but how they implement these capabilities is poorly understood. In this paper, we investigate the basic mathematical abilities often acquired by pre-trained language models. Concretely, we use mechanistic interpretability techniques to explain the (limited) mathematical abilities of GPT-2 small. As a case study, we examine its ability to take in sentences such as "The war lasted from the year 1732 to the year 17", and to predict valid two-digit end years (years > 32). We first identify a circuit, a small subset of GPT-2 small's computational graph, that computes this task's output. Then, we explain the role of each circuit component, showing that GPT-2 small's final multi-layer perceptrons boost the probability of end years greater than the start year. Finally, we show that our circuit generalizes to other tasks, playing a role in other greater-than scenarios.
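The greater-than task described above can be sketched in a few lines of plain Python. This is a minimal illustration of the prompt construction and the set of valid completions, not code from the paper; the helper names (`make_prompt`, `valid_end_years`) are our own, and scoring an actual model would additionally require loading GPT-2 small (e.g. via a transformer library) and reading off the logits for the two-digit year tokens.

```python
def make_prompt(noun: str, start_year: int) -> str:
    """Build a prompt in the style of the paper's example.
    The prompt ends with the century digits (e.g. "17"), so the
    model's next-token prediction completes the end year."""
    century = start_year // 100  # e.g. 17 for 1732
    return f"The {noun} lasted from the year {start_year} to the year {century}"

def valid_end_years(start_year: int) -> list[int]:
    """Two-digit completions YY that give an end year strictly after
    the start year, within the same century (YY > start year's YY)."""
    yy = start_year % 100  # last two digits of the start year
    return list(range(yy + 1, 100))

prompt = make_prompt("war", 1732)
valid = valid_end_years(1732)  # completions 33 through 99
```

A well-performing model should place most of its probability mass for the next two-digit token on `valid` rather than on years at or below the start year.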


