Decoder Tuning: Efficient Language Understanding as Decoding

12/16/2022
by Ganqu Cui, et al.

With the ever-growing sizes of pre-trained models (PTMs), it has become an emerging practice to provide users only with inference APIs, namely the model-as-a-service (MaaS) setting. To adapt PTMs while keeping model parameters frozen, most current approaches focus on the input side, searching for powerful prompts that stimulate the models to produce correct answers. However, we argue that input-side adaptation can be arduous due to the lack of gradient signals, and it usually requires thousands of API queries, resulting in high computation and time costs. In light of this, we present Decoder Tuning (DecT), which instead optimizes task-specific decoder networks on the output side. Specifically, DecT first extracts prompt-stimulated output scores for initial predictions. On top of that, we train an additional decoder network on the output representations to incorporate posterior data knowledge. With gradient-based optimization, DecT can be trained within seconds and requires only one PTM query per sample. Empirically, we conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a 10^3× speed-up.
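
The output-side pipeline sketched in the abstract can be illustrated in a few lines: query the frozen PTM once per sample, cache its prompt-stimulated output representations, and fit a small task-specific decoder on top with ordinary gradient descent. The sketch below is a minimal illustration in PyTorch, not the authors' implementation; the helper query_ptm, the DecoderHead module, the hidden size, and all hyperparameters are assumptions made for the example.

```python
# Minimal sketch of output-side adaptation under the MaaS setting.
# Assumptions: query_ptm stands in for the inference API; DecoderHead,
# the hidden size (768), and the hyperparameters are hypothetical.
import torch
import torch.nn as nn

def query_ptm(texts):
    """Stand-in for a single MaaS API call per sample: returns the frozen
    PTM's prompt-stimulated output representations (random features here)."""
    return torch.randn(len(texts), 768)

class DecoderHead(nn.Module):
    """Small task-specific decoder trained on top of cached PTM outputs."""
    def __init__(self, hidden_size=768, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, features):
        return self.net(features)

# Each training sample needs only ONE PTM query; the representations are
# cached, and only the lightweight decoder is optimized with gradients.
texts = ["example sentence"] * 32
labels = torch.randint(0, 2, (32,))
features = query_ptm(texts)          # one API call per sample, done once

decoder = DecoderHead()
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(50):                  # trains in seconds on cached features
    optimizer.zero_grad()
    loss = loss_fn(decoder(features), labels)
    loss.backward()
    optimizer.step()
```

Because the PTM outputs are cached, no further API traffic or gradient access to the frozen model is needed during decoder training, which is what keeps the query count at one per sample.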

research · 11/05/2019 · Deepening Hidden Representations from Pre-trained Language Models for Natural Language Understanding
Transformer-based pre-trained language models have proven to be effectiv...

research · 04/09/2022 · IDPG: An Instance-Dependent Prompt Generation Method
Prompt tuning is a new, efficient NLP transfer learning paradigm that ad...

research · 12/27/2020 · MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
One of the biggest challenges that prohibit the use of many current NLP ...

research · 04/21/2020 · DIET: Lightweight Language Understanding for Dialogue Systems
Large-scale pre-trained language models have shown impressive results on...

research · 04/15/2022 · MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation
Pre-trained language models have demonstrated superior performance in va...

research · 05/23/2022 · Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding
Prompt Tuning (PT) has been largely successful as a parameter-efficient ...

research · 10/21/2022 · Clip-Tuning: Towards Derivative-free Prompt Learning with a Mixture of Rewards
Derivative-free prompt learning has emerged as a lightweight alternative...
