Useful Confidence Measures: Beyond the Max Score

10/25/2022
by Gal Yona, et al.

An important component in deploying machine learning (ML) in safety-critical applications is having a reliable measure of confidence in the ML model's predictions. For a classifier f producing a probability vector f(x) over the candidate classes, the confidence is typically taken to be max_i f(x)_i. This approach is potentially limited, as it disregards the rest of the probability vector. In this work, we derive several confidence measures that depend on information beyond the maximum score, such as margin-based and entropy-based measures, and empirically evaluate their usefulness, focusing on NLP tasks with distribution shifts and Transformer-based models. We show that when models are evaluated on out-of-distribution data “out of the box”, using only the maximum score to inform the confidence measure is highly suboptimal. In the post-processing regime (where the scores of f can be improved using additional in-distribution held-out data), this remains true, though to a lesser extent. Overall, our results suggest that entropy-based confidence is a surprisingly useful measure.
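To make the three families of confidence measures mentioned in the abstract concrete, below is a minimal sketch of how they can be computed from a classifier's probability vector. The function names and the exact normalizations are assumptions for illustration; the precise definitions used in the paper may differ.

```python
import numpy as np

def max_score_confidence(probs: np.ndarray) -> np.ndarray:
    # Standard confidence: the maximum predicted probability, max_i f(x)_i.
    return probs.max(axis=-1)

def margin_confidence(probs: np.ndarray) -> np.ndarray:
    # Margin-based confidence: gap between the top two predicted probabilities.
    top_two = np.sort(probs, axis=-1)[..., -2:]
    return top_two[..., 1] - top_two[..., 0]

def entropy_confidence(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # Entropy-based confidence: negative entropy of the full probability
    # vector, so that larger values indicate higher confidence.
    return np.sum(probs * np.log(probs + eps), axis=-1)

# Example usage on a batch of two probability vectors over three classes.
p = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.35, 0.25]])
print(max_score_confidence(p))   # [0.7  0.4 ]
print(margin_confidence(p))      # [0.5  0.05]
print(entropy_confidence(p))     # less negative = more confident
```

Unlike the max score, the margin and entropy measures use information from the entire probability vector, which is the property the paper evaluates under distribution shift.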
