Byzantine-robust distributed one-step estimation

07/15/2023
by Chuhan Wang, et al.

This paper proposes a Robust One-Step Estimator (ROSE) to address the Byzantine failure problem in distributed M-estimation when a moderate fraction of node machines behaves arbitrarily. To construct ROSE, the algorithm uses the robust Variance Reduced Median Of Local (VRMOL) estimator to determine the initial parameter value for the iteration, and then, within a Newton-Raphson iteration procedure, communicates between the node machines and the central processor to derive robust VRMOL estimators of the gradient and the Hessian matrix, which yield the final estimator. ROSE achieves higher asymptotic relative efficiency than general median estimators without increasing the order of computational complexity. Moreover, the estimator can also cope with anomalous or missing samples on the central processor. We prove asymptotic normality when the parameter dimension p diverges as the sample size goes to infinity, and derive the convergence rate under weaker assumptions. Numerical simulations and a real data application demonstrate the effectiveness and robustness of ROSE.
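The abstract does not spell out the VRMOL construction or the variance-reduction and communication details, so the sketch below is only a rough illustration of the one-step idea under simplifying assumptions: a least-squares loss, coordinate-wise medians across machines standing in for the paper's robust aggregation, and a crude median of local estimates in place of the VRMOL initializer. All function names (local_gradient, local_hessian, robust_one_step) are hypothetical and not from the paper.

```python
import numpy as np

def local_gradient(theta, X, y):
    """Gradient of the local least-squares loss on one node machine."""
    return X.T @ (X @ theta - y) / len(y)

def local_hessian(theta, X, y):
    """Hessian of the local least-squares loss on one node machine."""
    return X.T @ X / len(y)

def robust_one_step(theta0, machines):
    """One Newton-Raphson step with median-aggregated gradient and Hessian.

    theta0   : robust initial estimate (the paper obtains this via VRMOL)
    machines : list of (X, y) tuples, one per node machine, some possibly Byzantine
    """
    grads = np.stack([local_gradient(theta0, X, y) for X, y in machines])
    hesss = np.stack([local_hessian(theta0, X, y) for X, y in machines])
    g = np.median(grads, axis=0)             # coordinate-wise median of local gradients
    H = np.median(hesss, axis=0)             # entry-wise median of local Hessians
    return theta0 - np.linalg.solve(H, g)    # single Newton update

# Toy usage: 20 machines, 3 of them Byzantine (returning corrupted responses).
rng = np.random.default_rng(0)
p, n_per, m = 5, 200, 20
theta_true = rng.normal(size=p)
machines = []
for k in range(m):
    X = rng.normal(size=(n_per, p))
    y = X @ theta_true + rng.normal(size=n_per)
    if k < 3:                                 # Byzantine machines corrupt their data
        y = -10.0 * y + rng.normal(scale=50.0, size=n_per)
    machines.append((X, y))

# Crude robust initializer: median of per-machine least-squares estimates.
theta0 = np.median(np.stack([np.linalg.lstsq(X, y, rcond=None)[0]
                             for X, y in machines]), axis=0)
theta_hat = robust_one_step(theta0, machines)
print(np.linalg.norm(theta_hat - theta_true))
```

Because the medians are taken per coordinate, a minority of arbitrarily corrupted machines cannot drag the aggregated gradient or Hessian far from their uncontaminated values, which is what makes the single Newton step robust in this toy setting.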


Related research

03/04/2021 - Variance Reduced Median-of-Means Estimator for Byzantine-Robust Distributed Inference
This paper develops an efficient distributed inference algorithm, which ...

06/15/2020 - Distributed Newton Can Communicate Less and Resist Byzantine Workers
We develop a distributed second order optimization algorithm that is com...

06/12/2019 - Communication-Efficient Accurate Statistical Estimation
When the data are stored in a distributed manner, direct application of ...

03/08/2023 - Byzantine-Robust Loopless Stochastic Variance-Reduced Gradient
Distributed optimization with open collaboration is a popular field sinc...

06/01/2022 - Byzantine-Robust Online and Offline Distributed Reinforcement Learning
We consider a distributed reinforcement learning setting where multiple ...

03/17/2021 - Escaping Saddle Points in Distributed Newton's Method with Communication Efficiency and Byzantine Resilience
We study the problem of optimizing a non-convex loss function (with sadd...
