Median of Means Principle for Bayesian Inference

by Shunan Yao et al.
University of Southern California

The topic of robustness is experiencing a resurgence of interest in the statistical and machine learning communities. In particular, robust algorithms making use of the so-called median-of-means estimator have been shown to satisfy strong performance guarantees for many problems, including estimation of the mean and covariance structure, as well as linear regression. In this work, we propose an extension of the median-of-means principle to the Bayesian framework, leading to the notion of the robust posterior distribution. In particular, we (a) quantify the robustness of this posterior to outliers, (b) show that it satisfies a version of the Bernstein–von Mises theorem that connects Bayesian credible sets to traditional confidence intervals, and (c) demonstrate that our approach performs well in applications.
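To make the underlying idea concrete, here is a minimal sketch of the classical median-of-means mean estimator that the abstract refers to (this illustrates the general principle only, not the paper's Bayesian construction; the function name and the block-splitting scheme are our own choices):

```python
import random
import statistics


def median_of_means(xs, k):
    """Median-of-means estimate of the mean of xs using k blocks.

    The data are shuffled, split into k roughly equal blocks, each
    block is averaged, and the median of the block means is returned.
    A few outliers can corrupt at most a few block means, so the
    median of the block means remains close to the true mean.
    """
    xs = list(xs)
    random.shuffle(xs)
    blocks = [xs[i::k] for i in range(k)]          # k interleaved blocks
    block_means = [sum(b) / len(b) for b in blocks]
    return statistics.median(block_means)
```

For example, on a sample of 99 values equal to 1.0 plus a single outlier of 1000.0, the plain sample mean is about 11, while `median_of_means(data, 5)` confines the outlier to one of the five blocks and returns 1.0.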



Related papers:

- A remark on "Robust machine learning by median-of-means"
- Robust Kernel Density Estimation with Median-of-Means principle
- Bayesian Inference on Multivariate Medians and Quantiles
- DeepMoM: Robust Deep Learning With Median-of-Means
- Efficient median of means estimator
- Median of means principle as a divide-and-conquer procedure for robustness, sub-sampling and hyper-parameters tuning
- β-Cores: Robust Large-Scale Bayesian Data Summarization in the Presence of Outliers
