Language models show human-like content effects on reasoning

by Ishita Dasgupta et al.

Abstract reasoning is a key ability for an intelligent system. Large language models achieve above-chance performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect, and depends on our knowledge and beliefs about the content of the reasoning problem. For example, humans reason much more reliably about logical rules that are grounded in everyday situations than about arbitrary rules involving abstract attributes. The training experiences of language models similarly endow them with prior expectations that reflect human knowledge and beliefs. We therefore hypothesized that language models would show human-like content effects on abstract reasoning problems. We explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task (Wason, 1968). We find that state-of-the-art large language models (with 7 or 70 billion parameters; Hoffmann et al., 2022) reflect many of the same patterns observed in humans across these tasks: like humans, models reason more effectively about believable situations than about unrealistic or abstract ones. Our findings have implications for understanding both these cognitive effects and the factors that contribute to language model performance.



