Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models

02/01/2023
by   Zhihong Shao, et al.

Large language models can perform various reasoning tasks with chain-of-thought prompting, which guides them toward answers via step-by-step demonstrations. However, prompt quality depends on the demonstrations given to the model, and handcrafting many of them is costly. We introduce Synthetic Prompting, a method that leverages a few handcrafted examples to prompt the model to generate more examples on its own, and then selects effective demonstrations to elicit better reasoning. Our method alternates between a backward and a forward process to generate new examples. The backward process generates a question that matches a sampled reasoning chain, ensuring the question is solvable and clearly stated. The forward process then produces a more detailed reasoning chain for that question, improving the quality of the example. We evaluate our method on numerical, symbolic, and algorithmic reasoning tasks and show that it outperforms existing prompting techniques.
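The backward-forward alternation described in the abstract can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' implementation: `call_llm` is a hypothetical stand-in for a real LLM API (stubbed here with canned responses so the control flow runs end to end), and the prompt templates and helper names are assumptions for demonstration purposes.

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stub standing in for a real LLM API call.
    # A real implementation would send `prompt` to a language model.
    if "Generate a question" in prompt:
        return "If Tom has 3 apples and buys 5 more, how many does he have?"
    return "Tom starts with 3 apples. He buys 5 more, so 3 + 5 = 8."

def backward_process(seed_examples, reasoning_chain):
    """Backward step: synthesize a question that matches a sampled
    reasoning chain, so the question is solvable and clearly stated."""
    demos = "\n\n".join(
        f"Reasoning: {r}\nQuestion: {q}" for q, r in seed_examples
    )
    prompt = f"{demos}\n\nReasoning: {reasoning_chain}\nGenerate a question:"
    return call_llm(prompt)

def forward_process(seed_examples, question):
    """Forward step: produce a more detailed reasoning chain for the
    synthesized question, improving the quality of the example."""
    demos = "\n\n".join(
        f"Question: {q}\nReasoning: {r}" for q, r in seed_examples
    )
    prompt = f"{demos}\n\nQuestion: {question}\nReasoning:"
    return call_llm(prompt)

def synthesize_example(seed_examples, sampled_chain):
    """One round of the alternation: backward, then forward."""
    question = backward_process(seed_examples, sampled_chain)
    chain = forward_process(seed_examples, question)
    return question, chain
```

In practice this loop would run many times from a few handcrafted seed examples, after which a selection step would pick the most effective synthesized demonstrations for the final prompt.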

Related research

10/07/2022 — Automatic Chain of Thought Prompting in Large Language Models
05/24/2023 — Towards Revealing the Mystery behind Chain of Thought: A Theoretical Perspective
12/16/2022 — The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning
04/23/2023 — Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
05/26/2023 — Demo2Code: From Summarizing Demonstrations to Synthesizing Code via Extended Chain-of-Thought
08/15/2023 — Forward-Backward Reasoning in Large Language Models for Verification
05/23/2023 — Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement
