Context Generation Improves Open Domain Question Answering

10/12/2022
by Dan Su, et al.

Closed-book question answering (QA) requires a model to directly answer an open-domain question without access to any external knowledge. Prior work on closed-book QA either directly finetunes or prompts a pretrained language model (LM) to leverage its stored knowledge. However, these approaches do not fully exploit the parameterized knowledge. To address this issue, we propose a two-stage, closed-book QA framework that uses a coarse-to-fine approach to extract relevant knowledge and answer a question. Our approach first generates a related context for a given question by prompting a pretrained LM. We then prompt the same LM for answer prediction using the generated context and the question. Additionally, to reduce failures caused by context uncertainty, we marginalize over multiple generated contexts. Experimental results on three QA benchmarks show that our method significantly outperforms previous closed-book QA methods (e.g., 68.6% exact match) and is competitive with open-book methods that exploit external knowledge sources. Our method better exploits the knowledge stored in pretrained LMs without adding extra learnable parameters or requiring finetuning, and paves the way for hybrid models that integrate pretrained LMs with external knowledge.
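The two-stage pipeline described in the abstract — sample several contexts from the LM, answer conditioned on each, then marginalize over contexts — can be sketched roughly as follows. This is a minimal illustration, not the authors' released code: `lm_generate` is a hypothetical wrapper around whatever pretrained LM and prompting setup is used, the prompt strings are placeholders for the paper's few-shot prompts, and the marginalization step is approximated here by a simple majority vote over the sampled answers.

```python
from collections import Counter
from typing import Callable, List


def closed_book_qa(
    question: str,
    lm_generate: Callable[[str], str],  # hypothetical: takes a prompt, returns a completion
    num_contexts: int = 8,
) -> str:
    """Two-stage closed-book QA: generate contexts, answer per context, then vote."""
    answers: List[str] = []
    for _ in range(num_contexts):
        # Stage 1: prompt the LM to generate a context relevant to the question
        # (in practice this prompt would include few-shot question/context demonstrations).
        context = lm_generate(
            "Generate a background passage that helps answer the question.\n"
            f"Question: {question}\nPassage:"
        )
        # Stage 2: prompt the same LM to predict an answer, conditioned on the
        # generated context and the question.
        answer = lm_generate(
            f"Passage: {context}\nQuestion: {question}\nAnswer:"
        )
        answers.append(answer.strip())
    # Marginalize over generated contexts: aggregate the per-context answers so that
    # no single low-quality context determines the final prediction.
    return Counter(answers).most_common(1)[0][0]
```

Sampling multiple contexts and aggregating the resulting answers is what limits the impact of any single poorly generated context; the voting step above is one simple way to realize that aggregation under these assumptions.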


research
06/03/2021

Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA?

Recent work has investigated the interesting question using pre-trained ...
research
11/23/2022

Can Open-Domain QA Reader Utilize External Knowledge Efficiently like Humans?

Recent state-of-the-art open-domain QA models are typically based on a t...
research
05/18/2023

Writing your own book: A method for going from closed to open book QA to improve robustness and performance of smaller LLMs

We introduce two novel methods, Tree-Search and Self-contextualizing QA,...
research
10/13/2022

Closed-book Question Generation via Contrastive Learning

Question Generation (QG) is a fundamental NLP task for many downstream a...
research
06/24/2022

OPERA: Harmonizing Task-Oriented Dialogs and Information Seeking Experience

Existing studies in conversational AI mostly treat task-oriented dialog ...
research
07/31/2022

Neural Knowledge Bank for Pretrained Transformers

The ability of pretrained Transformers to remember factual knowledge is ...
research
12/31/2020

Studying Strategically: Learning to Mask for Closed-book QA

Closed-book question-answering (QA) is a challenging task that requires ...
