Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding

05/23/2022
by Rishabh Bhardwaj, et al.

Prompt Tuning (PT) has been largely successful as a parameter-efficient way of conditioning large-scale pre-trained language models to perform a downstream task. More recently, soft prompt tuning has aimed to learn a fixed set of task-specific continuous vectors, i.e., soft tokens, that remain static across task samples. However, a fixed prompt may not generalize well to the diverse kinds of inputs a task comprises. With this motivation, we propose a novel way of prompting, Vector-quantized Input-contextualized Prompt Tuning, or VIP. Essentially, VIP focuses on two aspects: i) input-adaptation, input-specific contextualization of the soft tokens; and ii) vector quantization, where the contextualized tokens are passed through a quantizer that reduces representation variance by sampling prompts from a compact latent space. Over a wide range of natural language understanding tasks (SuperGLUE, QA, Relation Classification, NER, NLI), our proposed VIP framework beats the PT model by a margin of 1.19%. Additionally, on out-of-domain QA and multi-task setups over 4 different tasks spanning 12 domains, we find that VIP outperforms PT by 0.75%.
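To make the two components concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: input-adaptation is approximated here with a single cross-attention layer from the soft prompt tokens to the input embeddings, and quantization with a VQ-VAE-style nearest-neighbour codebook lookup using a straight-through gradient estimator. All module names, dimensions, and hyperparameters are illustrative assumptions.

import torch
import torch.nn as nn


class VIPPromptSketch(nn.Module):
    """Illustrative sketch of input-contextualized, vector-quantized soft prompts.

    Assumptions (not from the paper's code): contextualization via one
    cross-attention layer, quantization via a nearest-neighbour codebook
    with a straight-through gradient estimator.
    """

    def __init__(self, num_prompt_tokens=20, d_model=768, codebook_size=512):
        super().__init__()
        # Static soft prompt, as in standard prompt tuning.
        self.soft_prompt = nn.Parameter(torch.randn(num_prompt_tokens, d_model) * 0.02)
        # Contextualizer: lets each prompt token attend over the input tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        # Compact latent space from which quantized prompts are drawn.
        self.codebook = nn.Embedding(codebook_size, d_model)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, d_model) embeddings of the task input.
        batch = input_embeds.size(0)
        prompts = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)

        # i) input-adaptation: contextualize the soft tokens on this specific input.
        contextual, _ = self.cross_attn(prompts, input_embeds, input_embeds)

        # ii) vector quantization: snap each contextualized token to its nearest
        # codebook entry, reducing representation variance across inputs.
        dists = torch.cdist(contextual, self.codebook.weight.unsqueeze(0))
        codes = dists.argmin(dim=-1)            # (batch, num_prompt_tokens)
        quantized = self.codebook(codes)

        # Straight-through estimator so gradients still reach the contextualizer.
        quantized = contextual + (quantized - contextual).detach()

        # Prepend the quantized prompt to the input for the (frozen) language model.
        return torch.cat([quantized, input_embeds], dim=1)

In training, a VQ-VAE-style codebook/commitment loss would typically also be needed so the codebook entries track the contextualized tokens; that term is omitted from this sketch for brevity.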

Related research

06/11/2019
DoubleTransfer at MEDIQA 2019: Multi-Source Transfer Learning for Natural Language Understanding in the Medical Domain
This paper describes our competing system to enter the MEDIQA-2019 compe...

10/10/2022
Parameter-Efficient Tuning with Special Token Adaptation
Parameter-efficient tuning aims at updating only a small subset of param...

06/27/2023
Approximated Prompt Tuning for Vision-Language Pre-trained Models
Prompt tuning is a parameter-efficient way to deploy large-scale pre-tra...

09/27/2019
Reweighted Proximal Pruning for Large-Scale Language Representation
Recently, pre-trained language representation flourishes as the mainstay...

04/09/2022
IDPG: An Instance-Dependent Prompt Generation Method
Prompt tuning is a new, efficient NLP transfer learning paradigm that ad...

04/06/2022
Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning
Text matching is a fundamental technique in both information retrieval a...

12/16/2022
Decoder Tuning: Efficient Language Understanding as Decoding
With the ever-growing sizes of pre-trained models (PTMs), it has been an ...
