Robust and Heavy-Tailed Mean Estimation Made Simple, via Regret Minimization

by Samuel B. Hopkins et al.

We study the problem of estimating the mean of a distribution in high dimensions when either the samples are adversarially corrupted or the distribution is heavy-tailed. Recent developments in robust statistics have established efficient and (near) optimal procedures for both settings. However, the algorithms developed on each side tend to be sophisticated and do not directly transfer to the other, with many of them having ad-hoc or complicated analyses. In this paper, we provide a meta-problem and a duality theorem that lead to a new unified view on robust and heavy-tailed mean estimation in high dimensions. We show that the meta-problem can be solved either by a variant of the Filter algorithm from the recent literature on robust estimation or by the quantum entropy scoring scheme (QUE), due to Dong, Hopkins and Li (NeurIPS '19). By leveraging our duality theorem, these results translate into simple and efficient algorithms for both robust and heavy-tailed settings. Furthermore, the QUE-based procedure has run-time that matches the fastest known algorithms on both fronts. Our analysis of Filter is through the classic regret bound of the multiplicative weights update method. This connection allows us to avoid the technical complications in previous works and improve upon the run-time analysis of a gradient-descent-based algorithm for robust mean estimation by Cheng, Diakonikolas, Ge and Soltanolkotabi (ICML '20).
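To make the filtering idea concrete, here is a minimal sketch (not the paper's exact algorithm) of Filter-style robust mean estimation with multiplicative downweighting: points are scored by their squared projection onto the top eigenvector of the weighted covariance, and high-scoring points are multiplicatively downweighted until the covariance has a small spectral norm. The function name `filter_mean`, the iteration cap, and the stopping threshold are illustrative choices, not from the paper.

```python
import numpy as np

def filter_mean(X, iters=50, threshold=2.0):
    """Toy Filter-style robust mean estimator (illustrative sketch only).

    Repeatedly: compute the weighted mean and covariance, score each
    point by its squared projection onto the top eigenvector of the
    covariance, and multiplicatively downweight high-scoring points.
    Stops once the top eigenvalue is below `threshold` (clean
    isotropic data has top eigenvalue about 1).
    """
    n, d = X.shape
    w = np.ones(n) / n                        # uniform initial weights
    for _ in range(iters):
        mu = w @ X                            # weighted mean
        centered = X - mu
        cov = centered.T @ (centered * w[:, None])
        vals, vecs = np.linalg.eigh(cov)
        if vals[-1] <= threshold:             # covariance already small
            break
        v = vecs[:, -1]                       # top eigenvector
        tau = (centered @ v) ** 2             # outlier scores
        w = w * (1.0 - tau / tau.max())       # multiplicative downweight
        w = w / w.sum()                       # renormalize
    return w @ X
```

On synthetic data with a 10% cluster of planted outliers, the reweighted mean lands much closer to the true mean than the naive empirical mean, since the outliers' weights decay geometrically under the multiplicative update.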



