Shepherd Pre-trained Language Models to Develop a Train of Thought: An Iterative Prompting Approach

03/16/2022
by   Boshi Wang, et al.

While Pre-trained Language Models (PLMs) internalize a great amount of world knowledge, they have been shown incapable of recalling this knowledge to solve tasks requiring complex multi-step inference procedures. Similar to how humans develop a "train of thought" for such tasks, how can we equip PLMs with this ability? In this work, we explore an iterative prompting framework, a new prompting paradigm that progressively elicits relevant knowledge from PLMs for multi-step inference tasks. We identify key limitations of existing prompting methods: they are either restricted to queries with a single identifiable relation/predicate, or are agnostic to input contexts, which makes it difficult to capture the variability across different inference steps. We propose an iterative context-aware prompter, which addresses these limitations by learning to dynamically synthesize prompts conditioned on the current step's context. Experiments on three datasets involving multi-step inference show the effectiveness of the iterative scheme and our proposed prompter design.
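The abstract's iterative scheme can be sketched as a simple loop: at each step, a prompter synthesizes a prompt conditioned on the query and the knowledge elicited so far, the PLM generates one piece of knowledge, and that output is fed back as context for the next step. The sketch below is a minimal illustration with hypothetical stand-ins — `synthesize_prompt` for the learned context-aware prompter and `plm_generate` for the frozen PLM — not the paper's actual implementation.

```python
# Hypothetical sketch of an iterative prompting loop. Both helper
# functions below are placeholders: a real system would use a learned
# prompter and a frozen pre-trained language model.

def synthesize_prompt(query, context):
    # The context-aware prompter conditions on the query AND the
    # knowledge gathered so far, so each step can target a
    # different relation or inference step.
    return f"{query} Known so far: {' '.join(context)}"

def plm_generate(prompt):
    # Stub PLM: emits a new "fact" whose index grows with the number
    # of facts already present in the prompt (purely for illustration).
    return f"fact-{prompt.count('fact-') + 1}"

def iterative_prompting(query, num_steps=3):
    context = []  # knowledge elicited so far
    for _ in range(num_steps):
        prompt = synthesize_prompt(query, context)
        knowledge = plm_generate(prompt)  # elicit one inference step
        context.append(knowledge)         # feed it back into the next step
    return context

steps = iterative_prompting("Where was the father of Mary born?")
print(steps)  # → ['fact-1', 'fact-2', 'fact-3']
```

The key design point mirrored here is that the prompt at step *t* depends on the outputs of steps 1..*t−1*, which is what lets the scheme capture variability across inference steps, in contrast to a single fixed prompt.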
