Fully Scalable Massively Parallel Algorithms for Embedded Planar Graphs

04/15/2023
by Yi-Jun Chang, et al.

We consider the massively parallel computation (MPC) model, which is a theoretical abstraction of large-scale parallel processing models such as MapReduce. In this model, assuming the widely believed 1-vs-2-cycles conjecture, it is not possible to solve many basic graph problems in constant rounds with strongly sublinear memory size per machine. Recently, Holm and Tětek [SODA 2023] showed that it is possible to get around this barrier for planar graphs when a planar straight-line embedding of the graph is given. For such inputs on n vertices, they obtained constant-round MPC algorithms for connected components, minimum spanning tree (MST), and O(1)-approximation of st-shortest path, diameter, and radius, as long as the memory size per machine is 𝒮 = n^{2/3 + Ω(1)}.

In this work, we provide an improved recursive framework to obtain constant-round algorithms in the more challenging fully scalable regime where memory size per machine can be 𝒮 = n^δ for any given constant δ > 0. This gives the first constant-round algorithms in this regime for fundamental problems such as connected components, MST, and EMST. Moreover, we show that ε-emulators can be incorporated into our recursive framework to obtain constant-round (1+ε)-approximation algorithms for single-source shortest paths (SSSP) and shortest cycle in embedded planar graphs. We show that it is possible to construct a dual graph of the given embedded planar graph in constant rounds, which allows us to solve the (1+ε)-approximate st-maximum flow and minimum cut problems, as both reduce to a shortest cycle problem in the dual graph. Using O(n^2) total space, we also obtain constant-round algorithms for (1+ε)-approximate all-pairs shortest paths (APSP), diameter, and radius.
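To make the dual-graph reduction concrete: the dual of an embedded planar graph has one vertex per face and one edge per primal edge, connecting the two faces that edge borders; a minimum st-cut in the primal then corresponds to a shortest cycle in the dual. The sketch below is not the paper's MPC construction, it is a minimal sequential illustration of how faces (and hence the dual) are recovered from a combinatorial embedding, assuming the embedding is given as a rotation system mapping each vertex to the cyclic order of its neighbors (the function names `trace_faces` and `dual_graph` are ours, not from the paper).

```python
def trace_faces(rot):
    """Enumerate the faces of a combinatorial embedding.

    rot maps each vertex to the cyclic order of its neighbors.
    Each face is an orbit of directed edges ("darts") under the
    next-dart map: from dart (u, v), continue with (v, w) where w
    follows u in the rotation at v.
    """
    darts = {(u, v) for u in rot for v in rot[u]}
    nxt = {}
    for (u, v) in darts:
        ring = rot[v]
        w = ring[(ring.index(u) + 1) % len(ring)]
        nxt[(u, v)] = (v, w)
    faces, seen = [], set()
    for d in darts:
        if d in seen:
            continue
        face, cur = [], d
        while cur not in seen:
            seen.add(cur)
            face.append(cur)
            cur = nxt[cur]
        faces.append(face)
    return faces


def dual_graph(rot):
    """Build the dual: one vertex per face, one edge per primal edge.

    The two darts (u, v) and (v, u) of a primal edge lie on the two
    faces that the edge separates; connecting those faces yields the
    dual edge.
    """
    faces = trace_faces(rot)
    face_of = {d: i for i, f in enumerate(faces) for d in f}
    dual_edges = []
    for (u, v) in face_of:
        if u < v:  # handle each undirected primal edge once
            dual_edges.append((face_of[(u, v)], face_of[(v, u)]))
    return len(faces), dual_edges
```

For example, a triangle embedded in the plane has two faces (inner and outer), and its dual has two vertices joined by three parallel edges, one per primal edge; the face count satisfies Euler's formula V - E + F = 2. Carrying edge weights over to the dual is what turns min st-cut into a shortest cycle computation.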
