Structured Prompt Tuning

by Chi-Liang Liu, et al.
National Taiwan University

We propose structured prompt tuning, a simple and effective method for improving prompt tuning. Instead of prepending a sequence of tunable embeddings to the input, we generate the soft prompt embeddings through a hypernetwork. Our approach subsumes standard prompt tuning, allows more flexibility in model design, and can be applied to both single-task and multi-task training settings. Empirically, structured prompt tuning shows a gain of +1.2 to 1.5 points on the GLUE benchmark and is less sensitive to the change of learning rate, compared to standard prompt tuning.
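The core idea, generating the soft prompt through a hypernetwork rather than tuning the prompt embeddings directly, can be sketched as follows. This is a minimal illustration, not the paper's exact architecture: the layer sizes, the two-layer MLP hypernetwork, and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HypernetPrompt(nn.Module):
    """Sketch of structured prompt tuning: the soft prompt is the
    output of a small hypernetwork over low-dimensional tunable seeds,
    rather than a directly tuned (prompt_len x embed_dim) matrix."""

    def __init__(self, prompt_len=20, embed_dim=768, bottleneck=64):
        super().__init__()
        # Tunable low-dimensional seed for each prompt position.
        self.seed = nn.Parameter(torch.randn(prompt_len, bottleneck))
        # Hypernetwork mapping seeds to full-size prompt embeddings.
        self.hyper = nn.Sequential(
            nn.Linear(bottleneck, bottleneck),
            nn.Tanh(),
            nn.Linear(bottleneck, embed_dim),
        )

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) token embeddings
        # from a frozen pre-trained model.
        prompt = self.hyper(self.seed)  # (prompt_len, embed_dim)
        prompt = prompt.unsqueeze(0).expand(input_embeds.size(0), -1, -1)
        # Prepend the generated prompt, as in standard prompt tuning.
        return torch.cat([prompt, input_embeds], dim=1)

model = HypernetPrompt(prompt_len=20, embed_dim=768)
x = torch.randn(2, 10, 768)
out = model(x)
print(tuple(out.shape))  # (2, 30, 768)
```

Setting the hypernetwork to the identity recovers standard prompt tuning, which is why the method subsumes it while leaving room for other generator designs.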


