Calibration Scoring Rules for Practical Prediction Training

08/22/2018
by Spencer Greenberg, et al.

In situations where forecasters are scored on the quality of their probabilistic predictions, it is standard to use 'proper' scoring rules, which are desirable because they give forecasters no incentive to lie about their probabilistic beliefs. However, in the real-world context of building a training program designed to help people improve calibration through prediction practice, there are a variety of desirable traits for scoring rules beyond properness, traits that can substantially affect the user experience, the usability of the program, and the efficiency of learning. The space of proper scoring rules is too broad, in the sense that most proper scoring rules lack these other desirable properties. On the other hand, it is potentially also too narrow, in the sense that we may sometimes choose to give up properness when it conflicts with other properties that are even more desirable from the standpoint of usability and effective training. We introduce a class of scoring rules that we call 'Practical' scoring rules, designed to be intuitive to users in the context of 'right' vs. 'wrong' probabilistic predictions. We also introduce two specific scoring rules for prediction intervals, the 'Distance' and 'Order of magnitude' rules. These rules are designed to satisfy a variety of properties that, based on user testing, we believe are desirable for applied calibration training.
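The abstract does not specify the paper's Practical, Distance, or Order-of-magnitude rules, so the sketch below instead uses the standard Brier score, a well-known proper scoring rule, purely to illustrate what properness means: a forecaster minimizes their expected loss by reporting their true belief. The variable names and the choice of a numerical grid search are illustrative assumptions, not anything from the paper.

```python
import numpy as np

def brier_loss(p, outcome):
    """Brier loss for a reported probability p of a binary event (lower is better)."""
    return (p - outcome) ** 2

def expected_brier_loss(p, q):
    """Expected Brier loss when the forecaster's true belief is q but they report p."""
    return q * brier_loss(p, 1) + (1 - q) * brier_loss(p, 0)

# With true belief q = 0.7, expected loss q*(p-1)^2 + (1-q)*p^2 is minimized at p = q,
# which is exactly the "no incentive to lie" property of a proper scoring rule.
q = 0.7
grid = np.linspace(0, 1, 101)
losses = [expected_brier_loss(p, q) for p in grid]
best_p = grid[int(np.argmin(losses))]
print(f"Expected loss is minimized at reported p = {best_p:.2f} (true belief q = {q})")
```

Running this prints a minimizing report of 0.70, matching the true belief; an improper rule (e.g., rewarding only whether the forecast lands on the "right" side of 0.5) would instead push the optimal report toward 0 or 1.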
