On the Dark Side of Calibration for Modern Neural Networks

by Aditya Singh, et al.

Modern neural networks are highly miscalibrated, which poses a significant challenge for safety-critical systems that rely on deep neural networks (DNNs). Many recently proposed approaches have made substantial progress in improving DNN calibration, but they hardly touch upon refinement, which historically has been an essential aspect of calibration. Refinement measures how well a network's correct and incorrect predictions can be separated. This paper presents a theoretically and empirically supported exposition for reviewing a model's calibration and refinement. First, we show that the expected calibration error (ECE) decomposes into terms involving the predicted confidence and refinement. Building on this result, we highlight that regularisation-based calibration focuses only on naively reducing a model's confidence, which in turn severely degrades the model's refinement. We support our claims through rigorous empirical evaluations of many state-of-the-art calibration approaches on standard datasets. We find that calibration approaches such as label smoothing and mixup lower the utility of a DNN by degrading its refinement. This calibration-refinement trade-off holds for the majority of calibration methods even under natural data shift. These findings call for an urgent retrospective into some popular pathways taken for modern DNN calibration.
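To make the two quantities the abstract contrasts concrete, here is a minimal sketch of binned ECE and a common refinement proxy: the AUROC of using confidence to separate correct from incorrect predictions. The equal-width binning scheme and bin count are assumptions on our part, not details taken from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Binned ECE: weighted mean of |bin accuracy - bin confidence|.

    confidences: array of top-class probabilities in (0, 1].
    correct: 0/1 array, 1 where the prediction was right.
    """
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece

def refinement_auroc(confidences, correct):
    """Refinement proxy: probability that a randomly chosen correct
    prediction is assigned higher confidence than an incorrect one."""
    correct = correct.astype(bool)
    pos, neg = confidences[correct], confidences[~correct]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties
```

A calibration method that simply shrinks all confidences can lower ECE while leaving this AUROC unchanged or worse, which is the trade-off the paper investigates.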




Related papers:

- Calibration of Neural Networks
- A Stitch in Time Saves Nine: A Train-Time Regularizing Loss for Improved Neural Network Calibration
- Meta-Calibration: Meta-Learning of Model Calibration Using Differentiable Expected Calibration Error
- Evaluating Uncertainty Calibration for Open-Set Recognition
- Annealing Double-Head: An Architecture for Online Calibration of Deep Neural Networks
- Improving Calibration in Mixup-trained Deep Neural Networks through Confidence-Based Loss Functions
- Calibrating a Deep Neural Network with Its Predecessors
