Fast and Robust State Estimation and Tracking via Hierarchical Learning

06/29/2023
by Connor Mclaughlin, et al.

Fully distributed estimation and tracking solutions for large-scale multi-agent networks suffer from slow convergence and are vulnerable to network failures. In this paper, we aim to speed up convergence and enhance the resilience of state estimation and tracking using a simple hierarchical system architecture in which agents are clustered into smaller networks and a parameter server aids the information exchange among networks. This inter-network exchange is expensive and occurs only occasionally. We propose two consensus + innovation algorithms, one for state estimation and one for state tracking. Both algorithms use a novel hierarchical push-sum consensus component. For state estimation, we use dual averaging as the local innovation component. State tracking is much harder to handle in the presence of dropping-link failures: the standard integration of consensus and innovation is no longer applicable, and dual averaging is no longer feasible. Our tracking algorithm introduces a pair of additional variables per link, ensures that the relevant local variables evolve according to the state dynamics, and uses projected local gradient descent as the local innovation component. We characterize the convergence rates of both algorithms under a linear local observation model and minimal technical assumptions, and we numerically validate them through simulations of both the state estimation and tracking problems.
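For context, the following is a minimal, illustrative sketch of the generic consensus + innovation template the paper builds on: a push-sum consensus step (numerator/weight pairs mixed over a directed network with a column-stochastic matrix) followed by a local gradient innovation. It is not the paper's hierarchical algorithm; the mixing matrix `W`, the placeholder gradients, and the step size `alpha` are assumptions made purely for illustration.

```python
import numpy as np

def consensus_innovation_round(x, w, W, grads, alpha):
    """One synchronous round of push-sum consensus plus a local innovation step.

    x     : (n, d) push-sum numerators, one row per agent
    w     : (n,)   push-sum weights (denominators)
    W     : (n, n) column-stochastic mixing matrix; W[i, j] is the weight
            agent j assigns to the copy it pushes to agent i
    grads : (n, d) local innovation directions (e.g. gradients of local
            observation losses at the current ratio estimates)
    alpha : innovation step size
    """
    # Push-sum mixing: agents forward scaled copies of (x_j, w_j) along links.
    x_new = W @ x
    w_new = W @ w
    # Ratio estimates z_i = x_i / w_i are what converge to the network average.
    z = x_new / w_new[:, None]
    # Local innovation: each agent corrects its estimate with local information.
    z = z - alpha * grads
    # Re-encode the corrected estimates into the push-sum numerators.
    return z * w_new[:, None], w_new

# Toy usage: n = 4 agents, d = 3 state dimensions, a directed ring of links.
rng = np.random.default_rng(0)
n, d = 4, 3
A = np.eye(n) + np.diag(np.ones(n - 1), -1)
A[0, -1] = 1.0                                  # close the directed ring
W = A / A.sum(axis=0, keepdims=True)            # make columns sum to one
x, w = rng.normal(size=(n, d)), np.ones(n)
for _ in range(50):
    grads = x / w[:, None] - rng.normal(size=(n, d))  # placeholder local gradients
    x, w = consensus_innovation_round(x, w, W, grads, alpha=0.05)
print(x / w[:, None])                           # per-agent state estimates
```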

Related research

05/08/2023
Consensus analysis of random sub-graphs for distributed filtering with link failures
In this paper we carry out a stability analysis of a distributed consens...

07/27/2023
Network Fault-tolerant and Byzantine-resilient Social Learning via Collaborative Hierarchical Non-Bayesian Learning
As the network scale increases, existing fully distributed solutions sta...

05/13/2023
Network-GIANT: Fully distributed Newton-type optimization via harmonic Hessian consensus
This paper considers the problem of distributed multi-agent learning, wh...

03/31/2020
A Robust Gradient Tracking Method for Distributed Optimization over Directed Networks
In this paper, we consider the problem of distributed consensus optimiza...

08/22/2018
Distributed Big-Data Optimization via Block-Iterative Gradient Tracking
We study distributed big-data nonconvex optimization in multi-agent netw...

08/22/2018
Distributed Big-Data Optimization via Block-wise Gradient Tracking
We study distributed big-data nonconvex optimization in multi-agent netw...

03/06/2018
On Simple Back-Off in Complicated Radio Networks
In this paper, we study local and global broadcast in the dual graph mod...
