Using Language Models For Knowledge Acquisition in Natural Language Reasoning Problems

04/04/2023
by Fangzhen Lin, et al.

For a natural language problem that requires some non-trivial reasoning to solve, there are at least two ways to approach it with a large language model (LLM). One is to ask the LLM to solve the problem directly. The other is to use the LLM to extract the facts from the problem text and then pass them to a theorem prover to solve. In this note, we compare the two methods using ChatGPT and GPT-4 on a series of logic word puzzles, and conclude that the latter is the right approach.
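As a rough illustration of the second approach, the sketch below encodes a toy knights-and-knaves puzzle ("A says: we are both knaves") for the Z3 solver. The facts are written by hand here; in the pipeline the abstract describes, they would instead be produced by an LLM extraction step (represented only by a hypothetical placeholder in the comments). The specific puzzle, variable names, and choice of Z3 are illustrative assumptions, not details taken from the paper.

```python
from z3 import Bool, Solver, And, Not, is_true, sat

# In the paper's second approach, an LLM would turn the puzzle text into
# logical facts, e.g. via a hypothetical helper such as:
#   facts = extract_facts_with_llm(problem_text)   # placeholder, not a real API
# Here we hand-write the facts for a classic toy puzzle instead.

# One Boolean per person: True means knight (truth-teller), False means knave (liar).
a_knight = Bool("a_knight")
b_knight = Bool("b_knight")

# A's statement: "We are both knaves."
statement = And(Not(a_knight), Not(b_knight))

s = Solver()
# A knight's statement is true; a knave's statement is false.
s.add(a_knight == statement)

if s.check() == sat:
    m = s.model()
    print("A is a", "knight" if is_true(m[a_knight]) else "knave")
    print("B is a", "knight" if is_true(m[b_knight]) else "knave")
```

Running this prints that A is a knave and B is a knight, the standard solution; the point is that once the facts are in logical form, the solver does the reasoning rather than the LLM.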
