Evaluating the Evaluators: Are Current Few-Shot Learning Benchmarks Fit for Purpose?

07/06/2023
by Luísa Shimabucoro, et al.

Numerous benchmarks for few-shot learning have been proposed in the last decade. However, all of these benchmarks focus on performance averaged over many tasks, and the question of how to reliably evaluate and tune models trained for individual tasks in this regime has not been addressed. This paper presents the first investigation into task-level evaluation – a fundamental step when deploying a model. We measure the accuracy of performance estimators in the few-shot setting, consider strategies for model selection, and examine the reasons for the failure of evaluators usually thought of as being robust. We conclude that cross-validation with a low number of folds is the best choice for directly estimating the performance of a model, whereas bootstrapping or cross-validation with a large number of folds is better for model selection purposes. Overall, we find that existing benchmarks for few-shot learning are not designed in a way that allows one to get a reliable picture of how effectively methods can be used on individual tasks.
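
To make the abstract's two recommendations concrete, here is a minimal sketch of what task-level evaluation on a small support set can look like: low-fold cross-validation to estimate a single model's accuracy, and bootstrap resampling to choose between candidate models. It uses scikit-learn on a toy 5-way, 5-shot task; the classifier, fold count, resample count, and synthetic data are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch (not the paper's exact protocol): estimating task-level
# accuracy from a tiny labelled support set via low-fold cross-validation,
# and comparing two candidate models via a simple bootstrap.
# The classifier, fold count, and data sizes below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Toy few-shot task: 5 ways, 5 shots, 16-dimensional features.
X = rng.normal(size=(25, 16))
y = np.repeat(np.arange(5), 5)

# Direct performance estimation: low-fold cross-validation on the support set.
low_fold_cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
acc_estimate = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                               cv=low_fold_cv).mean()
print(f"Estimated task accuracy (2-fold CV): {acc_estimate:.3f}")

# Model selection: score candidate models on bootstrap resamples of the
# support set and pick the one that wins most often (illustrative only).
candidates = {
    "weak_reg": LogisticRegression(C=0.01, max_iter=1000),
    "strong_reg": LogisticRegression(C=10.0, max_iter=1000),
}
wins = {name: 0 for name in candidates}
for _ in range(50):
    idx = rng.choice(len(X), size=len(X), replace=True)
    oob = np.setdiff1d(np.arange(len(X)), idx)     # out-of-bag examples
    if len(np.unique(y[idx])) < 5 or len(oob) == 0:
        continue                                   # skip degenerate resamples
    scores = {name: clf.fit(X[idx], y[idx]).score(X[oob], y[oob])
              for name, clf in candidates.items()}
    wins[max(scores, key=scores.get)] += 1
print("Bootstrap model-selection wins:", wins)
```

In a real few-shot task, X and y would be the handful of labelled support examples available for that task rather than synthetic data.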

Related research

True Few-Shot Learning with Language Models (05/24/2021)
Pretrained language models (LMs) perform well on many tasks even when le...

Distribution-free Deviation Bounds of Learning via Model Selection with Cross-validation Risk Estimation (03/15/2023)
Cross-validation techniques for risk estimation and model selection are ...

CLUES: Few-Shot Learning Evaluation in Natural Language Understanding (11/04/2021)
Most recent progress in natural language understanding (NLU) has been dr...

Convex Techniques for Model Selection (11/27/2014)
We develop a robust convex algorithm to select the regularization parame...

Flamingo: a Visual Language Model for Few-Shot Learning (04/29/2022)
Building models that can be rapidly adapted to numerous tasks using only...

Scalable Diverse Model Selection for Accessible Transfer Learning (11/12/2021)
With the preponderance of pretrained deep learning models available off-...

A Statistical Model for Predicting Generalization in Few-Shot Classification (12/13/2022)
The estimation of the generalization error of classifiers often relies o...
