Teaching language models to support answers with verified quotes

03/21/2022
by Jacob Menick, et al.

Recent large language models often answer factual questions correctly. But users can't trust any given claim a model makes without fact-checking, because language models can hallucinate convincing nonsense. In this work we use reinforcement learning from human preferences (RLHP) to train "open-book" QA models that generate answers whilst also citing specific evidence for their claims, which aids in the appraisal of correctness. Supporting evidence is drawn from multiple documents found via a search engine, or from a single user-provided document. Our 280 billion parameter model, GopherCite, is able to produce answers with high-quality supporting evidence and to abstain from answering when unsure. We measure the performance of GopherCite by conducting human evaluation of answers to questions in a subset of the NaturalQuestions and ELI5 datasets. The model's response is found to be high-quality 80% of the time on the NaturalQuestions subset, and 67% of the time on the ELI5 subset. Abstaining from the third of questions for which it is most unsure improves performance to 90% and 80% respectively, approaching human baselines. However, analysis on the adversarial TruthfulQA dataset shows why citation is only one part of an overall strategy for safety and trustworthiness: not all claims supported by evidence are true.
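The abstention result above follows a simple selective-prediction recipe: rank questions by the model's confidence in its own answer and decline to answer the least-confident fraction. A minimal sketch of that idea, assuming a scalar confidence score is available per answer (the scoring function, data, and score values below are illustrative, not GopherCite's actual reward model):

```python
def selective_answers(scored_answers, abstain_fraction=1 / 3):
    """Keep (question, answer) pairs, abstaining on the lowest-scored fraction.

    scored_answers: list of (question, answer, confidence_score) tuples.
    Returns the retained (question, answer) pairs, most-confident last.
    """
    # Rank ascending by confidence, then drop the bottom `abstain_fraction`.
    ranked = sorted(scored_answers, key=lambda qa: qa[2])
    n_abstain = int(len(ranked) * abstain_fraction)
    return [(q, a) for q, a, _ in ranked[n_abstain:]]


# Hypothetical scores for three questions; the lowest third is abstained.
qa = [
    ("Q1", "A1", 0.91),
    ("Q2", "A2", 0.35),
    ("Q3", "A3", 0.78),
]
print(selective_answers(qa))  # abstains on Q2, the least-confident answer
```

Raising `abstain_fraction` trades coverage for quality, which is the trade-off the 90%/80% figures quantify: answering fewer questions, but answering the retained ones better.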


