Exploiting Cloze Questions for Few-Shot Text Classification and Natural Language Inference
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases which help the language model understand the given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. On several tasks, we show that PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin.
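To make the cloze-reformulation step concrete, here is a minimal sketch of how a pattern and verbalizer can turn a masked language model into a soft labeler. The pattern ("It was ___."), the sentiment verbalizer, and the use of bert-base-uncased are illustrative assumptions for a single sentiment example, not the paper's exact patterns or models.

```python
# Illustrative sketch (assumed pattern/verbalizer, not the paper's exact setup):
# score a cloze phrase with a masked LM and read off a soft label distribution.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def pattern(text: str) -> str:
    # A pattern rewrites the input as a cloze phrase with one masked token.
    return f"{text} It was {tokenizer.mask_token}."

# A verbalizer maps each label to a single token the model can predict.
verbalizer = {"positive": "great", "negative": "terrible"}

def soft_label(text: str) -> dict:
    """Return a probability distribution over labels for one example."""
    inputs = tokenizer(pattern(text), return_tensors="pt")
    # Locate the [MASK] position in the tokenized input.
    mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Restrict the softmax to the verbalizer tokens to obtain soft labels.
    ids = {label: tokenizer.convert_tokens_to_ids(tok)
           for label, tok in verbalizer.items()}
    probs = torch.softmax(torch.stack([logits[i] for i in ids.values()]), dim=0)
    return dict(zip(ids.keys(), probs.tolist()))

print(soft_label("The movie was a complete waste of time."))
```

In PET, distributions of this kind, obtained from models fine-tuned on the small labeled set, are used to annotate unlabeled examples, and a regular classifier is then trained on the resulting soft-labeled data.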