Measuring Progress on Scalable Oversight for Large Language Models

by Samuel R. Bowman et al.

Developing safe and useful general-purpose AI systems will require us to make progress on scalable oversight: the problem of supervising systems that potentially outperform us on most skills relevant to the task at hand. Empirical work on this problem is not straightforward, since we do not yet have systems that broadly exceed our abilities. This paper discusses one of the major ways we think about this problem, with a focus on ways it can be studied empirically. We first present an experimental design centered on tasks for which human specialists succeed but unaided humans and current general AI systems fail. We then present a proof-of-concept experiment meant to demonstrate a key feature of this experimental design and show its viability with two question-answering tasks: MMLU and time-limited QuALITY. On these tasks, we find that human participants who interact with an unreliable large-language-model dialog assistant through chat – a trivial baseline strategy for scalable oversight – substantially outperform both the model alone and their own unaided performance. These results are an encouraging sign that scalable oversight will be tractable to study with present models and bolster recent findings that large language models can productively assist humans with difficult tasks.



