Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models

04/23/2023
by   Jiashuo Sun, et al.

Large language models (LLMs) can achieve highly effective performance on various reasoning tasks by incorporating step-by-step chain-of-thought (CoT) prompting as demonstrations. However, the reasoning chains in demonstrations generated by LLMs are prone to errors, which can subsequently lead to incorrect reasoning during inference. Furthermore, inappropriate exemplars (overly simplistic or overly complex) can hurt overall performance across varying levels of difficulty. We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts Prompting), an iterative bootstrapping approach for selecting exemplars and generating reasoning chains. By utilizing iterative bootstrapping, our approach enables LLMs to autonomously rectify their errors, resulting in more precise and comprehensive reasoning chains. At the same time, it selects challenging yet answerable questions, together with their reasoning chains, as exemplars of moderate difficulty, which enhances the LLMs' generalizability across difficulty levels. Experimental results show that Iter-CoT achieves competitive performance on eleven datasets spanning three distinct reasoning tasks.
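The exemplar-selection loop described above can be sketched in a few lines. This is a minimal, hedged illustration only: `answer_fn` stands in for a real LLM call (the function name, the revision-feedback mechanism, and the stub model below are assumptions, not the paper's actual implementation). The idea captured is that questions solved correctly only after at least one self-correction round are "challenging yet answerable" and are kept as exemplars.

```python
def iter_cot_bootstrap(questions, answer_fn, max_revisions=3):
    """Sketch of an Iter-CoT-style bootstrapping loop (not the authors' code).

    questions:   list of (question, gold_answer) pairs.
    answer_fn:   hypothetical LLM call, answer_fn(question, attempt) ->
                 (reasoning_chain, predicted_answer); attempt > 0 means the
                 model is being asked to revise its earlier chain.
    Returns exemplars: questions answered correctly only after at least one
    revision, paired with their final (corrected) reasoning chains.
    """
    exemplars = []
    for question, gold in questions:
        for attempt in range(max_revisions + 1):
            chain, pred = answer_fn(question, attempt)
            if pred == gold:
                # Solved only after revision -> moderate difficulty exemplar.
                if attempt > 0:
                    exemplars.append((question, chain, gold))
                break
    return exemplars


# Deterministic stub standing in for an LLM, purely for demonstration:
# "easy" is solved immediately, "medium" after one revision, "hard" never.
def fake_llm(question, attempt):
    if question == "easy":
        return ("one-step chain", "A")
    if question == "medium" and attempt >= 1:
        return ("revised chain", "B")
    return ("flawed chain", "?")


qs = [("easy", "A"), ("medium", "B"), ("hard", "C")]
selected = iter_cot_bootstrap(qs, fake_llm)
# Only "medium" qualifies: "easy" needed no revision, "hard" never converges.
```

The design choice worth noting is the `attempt > 0` filter: immediately solved questions are treated as too simple to be informative exemplars, while questions that never converge within the revision budget are discarded as unanswerable.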


Related research

- 10/07/2022 · Automatic Chain of Thought Prompting in Large Language Models
- 02/01/2023 · Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models
- 10/03/2022 · Complexity-Based Prompting for Multi-Step Reasoning
- 05/05/2023 · Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework
- 05/24/2023 · Calc-X: Enriching Arithmetical Chain-of-Thoughts Datasets by Interaction with Symbolic Systems
- 10/04/2021 · AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts
- 03/13/2022 · PromptChainer: Chaining Large Language Model Prompts through Visual Programming
