Exploring the Integration Strategies of Retriever and Large Language Models

by Ye Liu, et al.

The integration of retrieved passages and large language models (LLMs), such as ChatGPT, has significantly contributed to improving open-domain question answering. However, there is still a lack of exploration regarding the optimal approach for incorporating retrieved passages into the answer generation process. This paper aims to fill this gap by investigating different methods of combining retrieved passages with LLMs to enhance answer generation. We begin by examining the limitations of a commonly used concatenation approach. Surprisingly, this approach often results in generating "unknown" outputs, even when the correct document is among the top-k retrieved passages. To address this issue, we explore four alternative strategies for integrating the retrieved passages with the LLMs: two single-round methods that utilize chain-of-thought reasoning, and two multi-round strategies that incorporate feedback loops. Through comprehensive analyses and experiments, we provide insightful observations on how to effectively leverage retrieved passages to enhance the answer generation capability of LLMs.
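As a point of reference, the concatenation baseline the abstract critiques can be sketched as a simple prompt builder that prepends the top-k retrieved passages to the question. This is a minimal illustration, not the paper's actual implementation; the function name, prompt wording, and "unknown" instruction are assumptions for the sketch.

```python
def build_concat_prompt(question: str, passages: list[str], k: int = 5) -> str:
    """Build a prompt by concatenating the top-k retrieved passages
    ahead of the question (the baseline strategy discussed above).
    Prompt template is illustrative, not the paper's exact wording."""
    context = "\n\n".join(
        f"Passage {i + 1}: {p}" for i, p in enumerate(passages[:k])
    )
    return (
        "Answer the question based on the passages below. "
        'If the answer is not contained in them, say "unknown".\n\n'
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The resulting string would then be sent to the LLM as a single-round query; the paper's observation is that even with the gold passage present in the context, this setup often yields "unknown" responses.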


