Large Language Models are few(1)-shot Table Reasoners

10/13/2022
by Wenhu Chen et al.

Recent literature has shown that large language models (LLMs) are generally excellent few-shot reasoners on text reasoning tasks. However, their capability on table reasoning tasks has yet to be explored. In this paper, we aim to understand how well LLMs can perform on these table tasks with few-shot in-context learning. Specifically, we evaluate LLMs on popular table QA and fact verification datasets such as WikiTableQuestion, FetaQA, TabFact, and FEVEROUS, and find that LLMs are competent at complex reasoning over table structures. When combined with chain-of-thought prompting, GPT-3 achieves very strong performance with only a 1-shot demonstration. We further manually study the reasoning chains elicited from the LLMs and find that they are highly consistent with the 'ground truth' semantic form. We believe our study opens new possibilities for employing LLMs on different table-based reasoning tasks in the few-shot scenario.
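To make the setup concrete, below is a minimal sketch of how a 1-shot chain-of-thought prompt for table QA might be assembled. The linearization format, demonstration table, and reasoning chain are illustrative assumptions, not the exact prompt used in the paper; the resulting string can be passed to any text-completion LLM.

```python
# Sketch of a 1-shot chain-of-thought prompt for table QA.
# The table linearization and demonstration below are illustrative
# assumptions, not the paper's exact prompt format.

def linearize_table(header, rows):
    """Flatten a table into plain text the LLM can read."""
    lines = ["col : " + " | ".join(header)]
    lines += [f"row {i + 1} : " + " | ".join(r) for i, r in enumerate(rows)]
    return "\n".join(lines)

# One worked demonstration with an explicit reasoning chain.
demo_table = linearize_table(
    ["Year", "Champion", "Score"],
    [["2019", "Team A", "3-1"], ["2020", "Team B", "2-0"], ["2021", "Team A", "4-2"]],
)
demonstration = (
    f"{demo_table}\n"
    "Question: How many times did Team A win the championship?\n"
    "Let's think step by step. Team A appears as Champion in 2019 and 2021, "
    "which is 2 rows. So the answer is 2.\n"
    "Answer: 2\n"
)

# Test instance in the same format; the reasoning chain is left to the model.
test_table = linearize_table(
    ["Player", "Goals"],
    [["Alice", "12"], ["Bob", "7"]],
)
prompt = (
    f"{demonstration}\n"
    f"{test_table}\n"
    "Question: Who scored more goals?\n"
    "Let's think step by step."
)

print(prompt)  # send this string to a text-completion LLM of your choice
```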
