Rethinking with Retrieval: Faithful Large Language Model Inference

12/31/2022
by Hangfeng He, et al.

Despite the success of large language models (LLMs) in various natural language processing (NLP) tasks, the knowledge stored in these models may inevitably be incomplete, out-of-date, or incorrect. This motivates the need to utilize external knowledge to assist LLMs. Unfortunately, current methods for incorporating external knowledge often require additional training or fine-tuning, which can be costly and may not be feasible for LLMs. To address this issue, we propose a novel post-processing approach, rethinking with retrieval (RR), which retrieves relevant external knowledge based on the decomposed reasoning steps obtained from chain-of-thought (CoT) prompting. This lightweight approach requires no additional training or fine-tuning and is not limited by the input length of LLMs. We evaluate the effectiveness of RR through extensive experiments with GPT-3 on three complex reasoning tasks: commonsense reasoning, temporal reasoning, and tabular reasoning. Our results show that RR can produce more faithful explanations and improve the performance of LLMs.
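
To make the method concrete, here is a minimal Python sketch of the RR idea: sample several CoT reasoning paths, retrieve external evidence for each decomposed reasoning step, and select the answer whose steps are best supported. The knowledge base, retriever, and word-overlap faithfulness score below are illustrative stand-ins, not the paper's implementation.

```python
"""Minimal sketch of rethinking with retrieval (RR): re-rank sampled
chain-of-thought (CoT) paths by how well retrieved evidence supports
their reasoning steps. All components below are toy stand-ins."""

import re
from dataclasses import dataclass


@dataclass
class CoTPath:
    steps: list[str]  # decomposed reasoning steps from CoT prompting
    answer: str       # final prediction implied by this path


# Hypothetical stand-in for an external knowledge source such as Wikipedia.
KB = [
    "Lionel Messi is an Argentine footballer, i.e. a soccer player.",
    "Baseball players use bats, while football players do not.",
]


def tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))


def retrieve(step: str) -> list[str]:
    """Stand-in retriever: KB sentences sharing any word with the step.
    A real system would use BM25 or dense retrieval over a large corpus."""
    return [p for p in KB if tokens(step) & tokens(p)]


def faithfulness(step: str, passages: list[str]) -> float:
    """Stand-in scorer: fraction of the step's words covered by the best
    retrieved passage; the paper's actual scoring is more involved."""
    if not passages:
        return 0.0
    return max(len(tokens(step) & tokens(p)) / len(tokens(step)) for p in passages)


def rethink_with_retrieval(paths: list[CoTPath]) -> str:
    """Post-process sampled CoT paths: prefer the answer whose reasoning
    steps are best supported by external knowledge. No model training or
    fine-tuning is involved."""
    def score(path: CoTPath) -> float:
        return sum(faithfulness(s, retrieve(s)) for s in path.steps) / len(path.steps)
    return max(paths, key=score).answer


if __name__ == "__main__":
    # Toy question: "Does Lionel Messi use a bat at work?"
    paths = [
        CoTPath(["Lionel Messi plays football.",
                 "Football players do not use bats."], "no"),
        CoTPath(["Lionel Messi plays baseball."], "yes"),
    ]
    print(rethink_with_retrieval(paths))  # -> "no" (better supported by KB)
```

Because RR only re-ranks already-generated outputs against retrieved evidence, it sits entirely outside the model, which is why no additional training is needed and the LLM's input length is never a constraint.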


Related research

05/24/2023
Unlocking Temporal Question Answering for Large Language Models Using Code Execution
Large language models (LLMs) have made significant progress in natural l...

12/20/2022
Interleaving Retrieval with Chain-of-Thought Reasoning for Knowledge-Intensive Multi-Step Questions
Recent work has shown that large language models are capable of generati...

04/16/2023
Chain of Thought Prompt Tuning in Vision Language Models
Language-Image Pre-training has demonstrated promising results on zero-s...

05/09/2023
MoT: Pre-thinking and Recalling Enable ChatGPT to Self-Improve with Memory-of-Thoughts
Large Language Models have shown impressive abilities on various tasks. ...

09/11/2023
DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning
Prompt tuning (PT), where a small amount of trainable soft (continuous) ...

09/04/2023
Prompting or Fine-tuning? A Comparative Study of Large Language Models for Taxonomy Construction
Taxonomies represent hierarchical relations between entities, frequently...

05/17/2023
Chain-of-Symbol Prompting Elicits Planning in Large Language Models
In this paper, we take the initiative to investigate the performance of ...
