Examining the Emergence of Deductive Reasoning in Generative Language Models

05/31/2023
by Peter Belcak et al.

We conduct a preliminary inquiry into the ability of generative transformer models to reason deductively from provided premises. We observe notable differences in performance between models from different training setups and find that deductive reasoning ability increases with model scale. Further, we discover that performance generally does not decrease with the length of the deductive chain needed to reach the conclusion, with the exception of the OpenAI GPT-3 and GPT-3.5 models. Our study considers a wide variety of transformer-decoder models, ranging from 117 million to 175 billion parameters in size.
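
To make the notion of "deductive chain length" concrete, the sketch below shows one plausible way such a task could be posed to a generative model as text. This is a hypothetical illustration, not the authors' actual benchmark: the function name, the premise template, and the yes/no answer format are all assumptions.

```python
# Hypothetical sketch of a deduction task with a configurable chain length.
# Not the paper's actual evaluation setup; names and formats are illustrative.

def build_deduction_prompt(chain_length: int) -> tuple[str, str]:
    """Build a prompt whose conclusion requires `chain_length` deductive steps.

    Premises form a chain thing0 -> thing1 -> ... -> thingN, so deriving the
    conclusion requires applying every implication in order.
    """
    entities = [f"thing{i}" for i in range(chain_length + 1)]
    premises = [
        f"If X is a {entities[i]}, then X is a {entities[i + 1]}."
        for i in range(chain_length)
    ]
    question = f"Rex is a {entities[0]}. Is Rex a {entities[-1]}? Answer yes or no."
    prompt = "Premises:\n" + "\n".join(premises) + "\n\n" + question
    expected = "yes"
    return prompt, expected


if __name__ == "__main__":
    prompt, expected = build_deduction_prompt(chain_length=3)
    print(prompt)
    # A model's completion would then be compared against `expected` ("yes").
```

Under this framing, the paper's finding would correspond to accuracy staying roughly flat as `chain_length` grows, for all model families except GPT-3 and GPT-3.5.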
