Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm

by Laria Reynolds, et al.

Prevailing methods for mapping large generative language models to supervised tasks may fail to sufficiently probe models' novel capabilities. Using GPT-3 as a case study, we show that 0-shot prompts can significantly outperform few-shot prompts. We suggest that the function of few-shot examples in these cases is better described as locating an already learned task rather than meta-learning. This analysis motivates rethinking the role of prompts in controlling and evaluating powerful language models. In this work, we discuss methods of prompt programming, emphasizing the usefulness of considering prompts through the lens of natural language. We explore techniques for exploiting the capacity of narratives and cultural anchors to encode nuanced intentions and techniques for encouraging deconstruction of a problem into components before producing a verdict. Informed by this more encompassing theory of prompt programming, we also introduce the idea of a metaprompt that seeds the model to generate its own natural language prompts for a range of tasks. Finally, we discuss how these more general methods of interacting with language models can be incorporated into existing and future benchmarks and practical applications.
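To make the contrast in the abstract concrete, here is a minimal sketch of the two prompting styles it discusses: a 0-shot prompt that locates a task with a natural language description alone, and a metaprompt that seeds the model to write its own task-specific prompt. The template wording and function names are illustrative assumptions, not artifacts from the paper.

```python
def zero_shot_prompt(task_description: str, input_text: str) -> str:
    """A 0-shot prompt: describe the task in natural language, no examples.

    (Hypothetical template, for illustration only.)
    """
    return f"{task_description}\n\nInput: {input_text}\nOutput:"


def metaprompt(task_name: str) -> str:
    """A metaprompt: ask the model to generate its own prompt for a task.

    (Hypothetical template, for illustration only.)
    """
    return (
        f"Write an effective natural-language prompt that instructs a "
        f"language model to perform the following task: {task_name}.\n"
        f"Prompt:"
    )


# The 0-shot prompt carries the full task specification itself...
print(zero_shot_prompt("Translate English to French.", "cheese"))
# ...while the metaprompt delegates prompt-writing to the model.
print(metaprompt("summarizing legal contracts"))
```

Either string would then be sent to a language model as-is; the paper's claim is that a well-chosen description like the first can match or beat a few-shot example list.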




