Robust and Reproducible Model Selection Using Bagged Posteriors

by Jonathan H. Huggins et al.

Bayesian model selection is premised on the assumption that the data are generated from one of the postulated models; in many applications, however, all of these models are incorrect. When two or more models provide a nearly equally good fit to the data, Bayesian model selection can be highly unstable, potentially leading to self-contradictory findings. In this paper, we explore using bagging on the posterior distribution ("BayesBag") when performing model selection – that is, averaging the posterior model probabilities over many bootstrapped datasets. We provide theoretical results characterizing the asymptotic behavior of the standard posterior and the BayesBag posterior under misspecification in the model selection setting. We empirically assess the BayesBag approach on synthetic and real-world data in (i) feature selection for linear regression and (ii) phylogenetic tree reconstruction. Our theory and experiments show that, in the presence of misspecification, BayesBag provides (a) greater reproducibility and (b) greater accuracy in selecting the correct model, compared to the standard Bayesian posterior; on the other hand, under correct specification, BayesBag is slightly more conservative than the standard posterior. Overall, our results demonstrate that BayesBag is an easy-to-use and widely applicable approach that improves upon standard Bayesian model selection by making it more stable and reproducible.
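The core procedure described in the abstract can be sketched in a few lines: resample the data with replacement many times, compute posterior model probabilities on each bootstrap replicate, and average. The sketch below is illustrative only and is not the paper's implementation; in particular, it approximates the marginal likelihood of each linear regression model with BIC (a common Laplace-approximation proxy, assumed here for simplicity), whereas the paper works with exact posterior model probabilities.

```python
import numpy as np

def bic_model_probs(y, X_list):
    """Approximate posterior model probabilities for a list of linear
    models via BIC (assumes equal prior model probabilities)."""
    n = len(y)
    bics = []
    for X in X_list:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        k = X.shape[1]
        # BIC for Gaussian linear regression with unknown variance
        bics.append(n * np.log(rss / n) + k * np.log(n))
    logp = -0.5 * np.array(bics)
    logp -= logp.max()          # stabilize before exponentiating
    p = np.exp(logp)
    return p / p.sum()

def bayesbag_model_probs(y, X_list, B=100, seed=0):
    """BayesBag: average the (approximate) posterior model probabilities
    over B bootstrap resamples of the data."""
    rng = np.random.default_rng(seed)
    n = len(y)
    probs = np.zeros(len(X_list))
    for _ in range(B):
        idx = rng.integers(0, n, size=n)   # sample rows with replacement
        probs += bic_model_probs(y[idx], [X[idx] for X in X_list])
    return probs / B
```

On data with a clear signal, the bagged probabilities agree with the standard ones; when two models fit nearly equally well, the bagged probabilities are pulled away from overconfident 0/1 values, which is the stabilizing behavior the paper analyzes.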




Related papers

- Using bagged posteriors for robust inference and model criticism
- The good, the bad, and the ugly: Bayesian model selection produces spurious posterior probabilities for phylogenetic trees
- Probably the Best Itemsets
- Episodic memory for continual model learning
- Rapidly Mixing Multiple-try Metropolis Algorithms for Model Selection Problems
- Robust and Parallel Bayesian Model Selection
- When are Bayesian model probabilities overconfident?
