MetaPrompting: Learning to Learn Better Prompts

09/23/2022
by   Yutai Hou, et al.

Prompting is regarded as one of the crucial advances in few-shot natural language processing. Recent research on prompting has moved from discrete, token-based "hard prompts" to continuous "soft prompts", which use learnable vectors as pseudo prompt tokens and achieve better performance. Despite their promise, these soft-prompting methods are observed to rely heavily on a good initialization to be effective. Unfortunately, obtaining a good initialization for soft prompts requires an understanding of the language model's inner workings and elaborate design, which is no easy task and has to be redone from scratch for each new task. To remedy this, we propose a generalized soft-prompting method called MetaPrompting, which adopts the well-recognized model-agnostic meta-learning algorithm to automatically find a better prompt initialization that facilitates fast adaptation to new prompting tasks. Extensive experiments show that MetaPrompting tackles the soft-prompt initialization problem and brings significant improvements on four different datasets (over 6 points of accuracy improvement in the 1-shot setting), achieving new state-of-the-art performance.
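The page does not include code, so below is a minimal, first-order MAML-style sketch of the idea described in the abstract: treat the soft prompt as the meta-learned initialization, adapt a copy of it on each episode's support set, and update the shared initialization from the query-set loss. The toy GRU encoder, the random episode sampler, and all hyperparameters are illustrative assumptions rather than the authors' implementation, and the first-order shortcut (a full MAML update would also backpropagate through the inner-loop steps) is used only to keep the sketch short.

```python
# First-order MAML-style sketch for learning a soft-prompt initialization.
# Assumption: a tiny frozen GRU stands in for the pre-trained language model.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB, PROMPT_LEN, N_CLASSES = 64, 8, 5

encoder = nn.GRU(EMB, EMB, batch_first=True)   # frozen stand-in for a PLM
for p in encoder.parameters():
    p.requires_grad_(False)
head = nn.Linear(EMB, N_CLASSES)               # shared classification head

# The meta-learned parameter: a soft-prompt initialization.
prompt_init = nn.Parameter(torch.randn(PROMPT_LEN, EMB) * 0.02)
meta_opt = torch.optim.Adam([prompt_init, *head.parameters()], lr=1e-3)


def forward(prompt, x):
    """Prepend the soft prompt to the input embeddings and classify."""
    batch = x.size(0)
    inp = torch.cat([prompt.unsqueeze(0).expand(batch, -1, -1), x], dim=1)
    hidden, _ = encoder(inp)
    return head(hidden[:, -1])                 # last hidden state -> logits


def sample_episode(n=16, seq=12):
    """Placeholder sampler: random support/query sets for one task."""
    xs, ys = torch.randn(n, seq, EMB), torch.randint(0, N_CLASSES, (n,))
    xq, yq = torch.randn(n, seq, EMB), torch.randint(0, N_CLASSES, (n,))
    return xs, ys, xq, yq


inner_lr, inner_steps = 0.1, 3
for meta_step in range(100):
    xs, ys, xq, yq = sample_episode()

    # Inner loop: adapt a copy of the prompt on the support set.
    prompt = prompt_init.detach().clone().requires_grad_(True)
    for _ in range(inner_steps):
        support_loss = F.cross_entropy(forward(prompt, xs), ys)
        (grad,) = torch.autograd.grad(support_loss, prompt)
        prompt = (prompt - inner_lr * grad).detach().requires_grad_(True)

    # Outer loop (first-order approximation): the query-set gradient taken
    # at the adapted prompt is applied to the shared initialization.
    query_loss = F.cross_entropy(forward(prompt, xq), yq)
    query_loss.backward()
    prompt_init.grad = prompt.grad.clone()
    meta_opt.step()
    meta_opt.zero_grad()
```

At test time, a new task would start its inner-loop adaptation from the learned prompt_init instead of a random or hand-designed prompt, which is the fast-adaptation benefit the abstract describes.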


Related research

Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models (03/12/2023)
Prompt tuning, a recently emerging paradigm, enables the powerful vision...

Meta-augmented Prompt Tuning for Better Few-shot Learning (03/22/2023)
Prompt tuning is a parameter-efficient method, which freezes all PLM par...

Scalable Prompt Generation for Semi-supervised Learning with Language Models (02/18/2023)
Prompt-based learning methods in semi-supervised learning (SSL) settings...

Effective Structured Prompting by Meta-Learning and Representative Verbalizer (06/01/2023)
Prompt tuning for pre-trained masked language models (MLM) has shown pro...

Learning to Initialize: Can Meta Learning Improve Cross-task Generalization in Prompt Tuning? (02/16/2023)
Prompt tuning (PT) which only tunes the embeddings of an additional sequ...

Meta-Learning with Adaptive Hyperparameters (10/31/2020)
Despite its popularity, several recent works question the effectiveness ...

Towards Reliable Misinformation Mitigation: Generalization, Uncertainty, and GPT-4 (05/24/2023)
Misinformation poses a critical societal challenge, and current approach...
