Extending Lattice Linearity for Self-Stabilizing Algorithms

09/27/2021
by Arya Tanmay Gupta, et al.

In this article, we focus on extending the notion of lattice linearity to self-stabilizing programs. Lattice linearity allows a node to execute its actions with old information about the state of other nodes and still preserve correctness. It increases the concurrency of the program execution by eliminating the need for synchronization among its nodes. The extension – denoted as eventually lattice linear algorithms – is demonstrated with an example of the service-demand based minimal dominating set (SDDS) problem, which is a generalization of the dominating set problem; the resulting algorithm converges in 2n moves, where n is the number of nodes. Subsequently, we also show that the same approach can be used for various other problems, including minimal vertex cover, maximal independent set, and graph coloring.
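As a rough illustration of the acting-on-old-information idea (and not the paper's SDDS algorithm or its eventually lattice linear construction), the hypothetical Python sketch below runs an ID-priority rule for maximal independent set, one of the problems mentioned above, where every node evaluates its action against a snapshot of its neighbours' states that lags one round behind. The graph, the initial states, and the one-round lag are made-up assumptions for the sake of the example.

```python
# Illustrative sketch only: not the paper's algorithm. It mimics the idea
# that a node may act on an *old* snapshot of its neighbours' states and
# the computation still converges, using a simple ID-priority maximal
# independent set rule on a hypothetical example graph.

# Hypothetical undirected graph as an adjacency list.
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [1, 2, 4],
    4: [3],
}

# Arbitrary (possibly "corrupted") initial state: True means "in the set".
in_set = {v: v % 2 == 0 for v in graph}

# Each node keeps a stale snapshot of its neighbours' states; here the
# snapshot lags one round behind the true state to model old information.
snapshot = {v: {u: in_set[u] for u in graph[v]} for v in graph}

def step(v):
    """ID-priority rule evaluated on v's stale snapshot: v should be in
    the set exactly when no smaller-ID neighbour appears to be in it."""
    return not any(snapshot[v][u] for u in graph[v] if u < v)

for round_no in range(2 * len(graph)):
    # Every node moves concurrently, each using only its old snapshot.
    new_state = {v: step(v) for v in graph}
    # Snapshots are refreshed afterwards, so information spreads with delay.
    snapshot = {v: {u: in_set[u] for u in graph[v]} for v in graph}
    in_set = new_state

print("independent set:", sorted(v for v in graph if in_set[v]))
```

Even though every decision is based on a lagging snapshot, the rule reaches a fixed point (here {0, 3}), because once the smallest-ID nodes stabilize, their neighbours' stale views eventually catch up and stabilize as well.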
