Fine-Tuning Language Models from Human Preferences

09/18/2019
by Daniel M. Ziegler, et al.

Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions. Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics.
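The abstract describes building a reward model from human comparisons and then optimizing against it with RL. As a minimal illustration of the first half of that pipeline, the sketch below fits a reward function from pairwise preferences using a Bradley-Terry style logistic loss. This is a hypothetical toy (a linear model over fixed feature vectors stands in for the paper's Transformer reward head; all names and data are invented for illustration), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward(w, x):
    """Scalar reward for feature vector x under parameters w (toy stand-in
    for a learned reward model)."""
    return x @ w

def train_reward_model(pairs, dim, lr=0.1, steps=500):
    """Fit w so preferred samples score higher than rejected ones.

    pairs: list of (x_preferred, x_rejected) feature vectors from
    human comparisons. Per-pair loss is -log sigmoid(r_pref - r_rej),
    the Bradley-Terry / logistic preference loss used in reward learning.
    """
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for x_p, x_r in pairs:
            margin = reward(w, x_p) - reward(w, x_r)
            p = 1.0 / (1.0 + np.exp(-margin))  # model's P(preferred wins)
            grad += (p - 1.0) * (x_p - x_r)    # gradient of -log p w.r.t. w
        w -= lr * grad / len(pairs)
    return w

# Toy "human" preferences generated from a hidden true reward.
true_w = np.array([2.0, -1.0])
xs = rng.normal(size=(200, 2))
pairs = []
for i in range(0, 200, 2):
    a, b = xs[i], xs[i + 1]
    pairs.append((a, b) if a @ true_w >= b @ true_w else (b, a))

w = train_reward_model(pairs, dim=2)
acc = np.mean([reward(w, p) > reward(w, r) for p, r in pairs])
print(acc)  # fraction of comparisons the learned reward orders correctly
```

In the paper's setting, the learned reward then serves as the objective for RL fine-tuning of the language model (with a KL penalty to the pretrained policy); this sketch covers only the comparison-fitting step.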

Related research

- SLiC-HF: Sequence Likelihood Calibration with Human Feedback (05/17/2023)
- Learning to summarize from human feedback (09/02/2020)
- Exploring the Curious Case of Code Prompts (04/26/2023)
- Stabilizing RLHF through Advantage Model and Selective Rehearsal (09/18/2023)
- Deep Reinforcement Learning with Distributional Semantic Rewards for Abstractive Summarization (08/31/2019)
- Uncertainty Estimation for Language Reward Models (03/14/2022)
- RL with KL penalties is better viewed as Bayesian inference (05/23/2022)
