Discrete and Soft Prompting for Multilingual Models

09/08/2021
by Mengjie Zhao, et al.

It has been shown for English that discrete and soft prompting perform strongly in few-shot learning with pretrained language models (PLMs). In this paper, we show that discrete and soft prompting perform better than finetuning in multilingual cases: crosslingual transfer and in-language training of multilingual natural language inference. For example, with 48 English training examples, finetuning obtains 33.74% test accuracy, barely surpassing the majority baseline (33.33%). In contrast, discrete and soft prompting outperform finetuning, achieving 36.43% and 38.79%. We also demonstrate good performance of prompting with training data in multiple languages other than English.
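As a rough illustration of the discrete (cloze-style) prompting setup the abstract refers to, the sketch below scores label words at a mask position with a multilingual masked LM instead of training a classification head. The model name, template, and "Yes/Maybe/No" verbalizers are illustrative assumptions, not necessarily the paper's exact choices.

```python
# Minimal sketch of cloze-style discrete prompting for NLI with a multilingual
# masked PLM. Template and verbalizers are assumptions for illustration only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "xlm-roberta-base"  # any multilingual masked LM would do here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Map NLI labels to single label words (hypothetical verbalizer choice).
verbalizers = {"entailment": "Yes", "neutral": "Maybe", "contradiction": "No"}
verbalizer_ids = {
    label: tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
    for label, word in verbalizers.items()
}

def classify(premise: str, hypothesis: str) -> str:
    # Cloze template: the PLM is asked to fill the mask with a label word.
    text = f"{premise}? {tokenizer.mask_token}, {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Predict the label whose verbalizer token gets the highest score.
    return max(verbalizer_ids, key=lambda lab: logits[verbalizer_ids[lab]].item())

print(classify("A man is playing a guitar.", "A person is making music."))
```

In a few-shot setting, the same cloze formulation can be finetuned on a handful of labeled examples; soft prompting instead prepends trainable continuous embeddings to the input while keeping the template idea, but that variant is not shown here.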
