Bayes Factors can only Quantify Evidence w.r.t. Sets of Parameters, not w.r.t. (Prior) Distributions on the Parameter
Bayes factors are characterized both by the powerful mathematical framework of Bayesian statistics and by their useful interpretation as a quantification of evidence. The former requires a parameter distribution that changes upon seeing the data; the latter requires two fixed hypotheses to which the evidence quantification refers. Naturally, these fixed hypotheses must not change upon seeing the data; only their credibility should. Yet it is exactly such a change of the hypotheses themselves (not only of their credibility) that occurs upon seeing the data if their content is represented by parameter distributions (a trend in the context of Bayes factors for about the last decade), rendering a correct interpretation of the Bayes factor rather useless. Instead, this paper argues that the inferential foundation of Bayes factors can be maintained only if hypotheses are sets of parameters, not parameter distributions. In addition, particular attention is paid to providing an explicit terminology for the big picture of statistical inference in the context of Bayes factors, as well as to the distinction between knowledge about the phenomenon of interest (formalized by the prior distribution and allowed to change) and theoretical positions on it (formalized as hypotheses and required to stay fixed).
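To make the distinction concrete, the following is a minimal illustrative sketch (the example and all numbers are assumptions, not taken from the paper): a Bayes factor for two fixed *sets* of parameters concerning a binomial proportion theta, namely H0: theta <= 0.5 versus H1: theta > 0.5, under a Beta(1, 1) prior. The hypotheses, being sets, never change; seeing the data only updates the distribution over theta, and thereby the credibility of each fixed hypothesis.

```python
# Illustrative sketch (assumed example): Bayes factor for two fixed
# set hypotheses about a binomial proportion theta,
# H0: theta <= 0.5 vs H1: theta > 0.5, under a Beta(1, 1) prior.
import math

def beta_pdf(t, a, b):
    """Density of the Beta(a, b) distribution at t."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * t ** (a - 1) * (1.0 - t) ** (b - 1)

def beta_cdf(x, a, b, steps=20000):
    """P(theta <= x) under Beta(a, b), via the trapezoid rule."""
    h = x / steps
    total = 0.5 * (beta_pdf(0.0, a, b) + beta_pdf(x, a, b))
    for i in range(1, steps):
        total += beta_pdf(i * h, a, b)
    return total * h

a, b = 1.0, 1.0   # Beta prior hyperparameters (assumed)
n, k = 20, 14     # assumed data: 14 successes in 20 trials

# Prior and posterior probability of each hypothesis (set of thetas);
# conjugacy gives the posterior Beta(a + k, b + n - k).
prior_h0 = beta_cdf(0.5, a, b)
post_h0 = beta_cdf(0.5, a + k, b + n - k)
prior_odds = prior_h0 / (1.0 - prior_h0)
post_odds = post_h0 / (1.0 - post_h0)

# Bayes factor BF_01 = posterior odds / prior odds: how much the data
# shifted the credibility of the fixed hypothesis H0 relative to H1.
bf_01 = post_odds / prior_odds
print(f"BF_01 = {bf_01:.4f}")
```

With 14 successes in 20 trials the data favor H1, so BF_01 comes out well below 1; throughout, only the distribution over theta was updated, while both hypotheses remained the same fixed parameter sets.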