Few-shot learning for sentence pair classification and its applications in software engineering
Few-shot learning, the ability to train models with only limited labeled data, has become increasingly popular in the natural language processing (NLP) domain, as large language models such as GPT and T0 have been empirically shown to achieve strong performance across numerous tasks from just a handful of labeled examples. Smaller language models such as BERT and its variants have likewise been shown to reach strong few-shot performance when combined with few-shot learning algorithms such as pattern-exploiting training (PET) and SetFit. The focus of this work is to investigate the performance of alternative few-shot learning approaches with BERT-based models. Specifically, vanilla fine-tuning, PET, and SetFit are compared across numerous BERT-based checkpoints over an array of training set sizes. To facilitate this investigation, applications of few-shot learning in software engineering are considered. For each task, high-performing techniques and their associated model checkpoints are identified through detailed empirical analysis. Our results establish PET as a strong few-shot learning approach and show that, with just a few hundred labeled examples, it can achieve performance near that of fine-tuning on full-sized data sets.
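To make one of the compared approaches concrete, below is a minimal sketch of few-shot training with the Hugging Face setfit library (v1.x API). The checkpoint name, the toy sentence-pair examples, and the hyperparameters are illustrative assumptions, not the paper's actual experimental setup.

    # Minimal SetFit few-shot training sketch; data and settings are illustrative.
    from datasets import Dataset
    from setfit import SetFitModel, Trainer, TrainingArguments

    # A handful of labeled sentence pairs, encoded as single texts (assumed encoding).
    train_dataset = Dataset.from_dict({
        "text": [
            "Close the file handle. [SEP] Release the file descriptor.",
            "Sort the list in place. [SEP] Open a network socket.",
        ],
        "label": [1, 0],  # 1 = semantically related, 0 = unrelated
    })

    # Any sentence-transformer checkpoint can back the SetFit model.
    model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

    args = TrainingArguments(batch_size=8, num_epochs=1)
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    trainer.train()  # contrastive embedding fine-tuning, then a classification head

    print(model.predict(["Flush the buffer. [SEP] Write pending data to disk."]))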