Can Calibration Improve Sample Prioritization?

10/12/2022
by Ganesh Tata, et al.

Calibration can reduce overconfident predictions of deep neural networks, but can calibration also accelerate training by selecting the right samples? In this paper, we show that it can. We study the effect of popular calibration techniques on selecting better subsets of samples during training (also called sample prioritization) and observe that calibration improves the quality of the selected subsets and reduces the number of examples per epoch (by at least 70%), thereby speeding up the overall training process. We further study the effect of using calibrated pre-trained models, coupled with calibration during training, to guide sample prioritization, which again appears to improve the quality of the samples selected.
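The abstract does not spell out the selection mechanism, but the idea can be illustrated with a minimal sketch: score each training example by the predictive uncertainty of a calibrated model and keep only the most uncertain fraction for the epoch. Here, temperature scaling stands in for the calibration step and predictive entropy for the difficulty score; both the `temperature` and `keep_fraction` values are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; T > 1 softens overconfident outputs."""
    z = logits / temperature
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def prioritize(logits, keep_fraction=0.3, temperature=2.0):
    """Return indices of the most uncertain samples under a calibrated model.

    Uncertainty is the predictive entropy of the temperature-scaled softmax.
    Keeping ~30% of examples mirrors the >=70% per-epoch reduction reported
    in the abstract (the exact fraction here is an assumption).
    """
    probs = softmax(logits, temperature)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    k = max(1, int(keep_fraction * len(logits)))
    return np.argsort(entropy)[-k:]  # hardest (highest-entropy) samples

# Example: 10 samples, 3 classes
rng = np.random.default_rng(0)
logits = rng.normal(size=(10, 3)) * 5
subset = prioritize(logits)
print(len(subset))  # 3
```

In a training loop, `prioritize` would be called once per epoch on the model's current logits (or those of a calibrated pre-trained model), and only the returned subset would be fed to the optimizer.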

Related research

- A Close Look into the Calibration of Pre-trained Language Models (10/31/2022): Pre-trained language models (PLMs) achieve remarkable performance on man...
- Diverse Ensembles Improve Calibration (07/08/2020): Modern deep neural networks can produce badly calibrated predictions, es...
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction (04/20/2023): Large-scale pre-trained models have achieved remarkable success in a var...
- Neural Network-Based Active Learning in Multivariate Calibration (03/19/2015): In chemometrics, data from infrared or near-infrared (NIR) spectroscopy ...
- RankMixup: Ranking-Based Mixup Training for Network Calibration (08/23/2023): Network calibration aims to accurately estimate the level of confidences...
- Artificial neural networks in calibration of nonlinear mechanical models (02/04/2015): Rapid development in numerical modelling of materials and the complexity...
- Calibration of Distributionally Robust Empirical Optimization Models (11/17/2017): In this paper, we study the out-of-sample properties of robust empirical...
