How fast can we reach a target vertex in stochastic temporal graphs?
Temporal graphs are used to abstractly model real-life networks that are inherently dynamic in nature. Given a static underlying graph G = (V, E), a temporal graph on G is a sequence of snapshots G_t, one for each time step t ≥ 1. In this paper we study stochastic temporal graphs, i.e., stochastic processes G whose random variables are the snapshots of a temporal graph on G. A natural feature observed in various real-life scenarios is a memory effect in the appearance probabilities of particular edges: the probability that an edge e ∈ E appears at time step t depends on its appearance (or absence) in the previous k time steps. We study the hierarchy of memory-k models, which capture this memory effect in an edge-centric network evolution: every edge of G has its own independent probability distribution for its appearance over time. Clearly, for every k ≥ 1, memory-(k-1) is a special case of memory-k. We make a clear distinction between the values k = 0 ("no memory") and k ≥ 1 ("some memory"), since, as our results indicate, these models can exhibit fundamentally different computational behavior. For every k ≥ 0 we investigate the complexity of two naturally related, but fundamentally different, temporal path (journey) problems: MINIMUM ARRIVAL and BEST POLICY. In the first problem we seek the expected arrival time of a foremost journey between two designated vertices s and y. In the second we seek the expected arrival time of the best policy for actually choosing a particular s-y journey. We present a detailed investigation of the computational landscape of both problems for the different values of the memory k. Among other results we prove that, surprisingly, MINIMUM ARRIVAL is strictly harder than BEST POLICY; in fact, for k = 0, MINIMUM ARRIVAL is #P-hard, while BEST POLICY is solvable in O(n^2) time.
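To make the tractable side of this contrast concrete, here is a minimal Python sketch of a Dijkstra-style computation of the BEST POLICY value in the memoryless (k = 0) model, where each edge appears independently at every step with a fixed probability. It assumes unit-length time steps, that the traveller may always wait at its current vertex, and that crossing an appearing edge completes within the step in which it appears. The graph encoding (a dict p of per-edge appearance probabilities) and all names are illustrative choices of ours, not the paper's notation, and the routine sketches the standard greedy value computation rather than the authors' exact algorithm.

```python
import math

def best_policy_expected_arrival(n, p, s, y):
    """Expected arrival time of the best s-y policy in the memory-0 model.

    n: number of vertices, labelled 0..n-1
    p: dict mapping frozenset({u, v}) -> per-step appearance probability
    Each step, the policy either waits or crosses one appearing edge.
    """
    def prob(u, v):
        return p.get(frozenset((u, v)), 0.0)

    h = [math.inf] * n      # tentative expected arrival times to y
    h[y] = 0.0
    done = [False] * n
    acc = [0.0] * n         # sum of q_i * h(v_i) over finalized neighbors
    none = [1.0] * n        # prob. no edge to a finalized neighbor appears

    for _ in range(n):
        # Finalize the non-finalized vertex with smallest tentative value.
        u = min((v for v in range(n) if not done[v]), key=lambda v: h[v])
        if h[u] == math.inf:        # remaining vertices cannot reach y
            break
        done[u] = True
        if u == s:
            return h[s]
        # Vertices finalize in increasing order of h, so each neighbor w
        # incorporates u as its next-best escape option: the best policy
        # at w crosses to the appearing finalized neighbor with smallest
        # h, and otherwise waits one step and retries, which gives
        #   h(w) = (1 + sum_i q_i * h(v_i)) / (1 - prod_i (1 - p_i)).
        for w in range(n):
            pw = prob(w, u)
            if done[w] or pw == 0.0:
                continue
            acc[w] += none[w] * pw * h[u]
            none[w] *= 1.0 - pw
            h[w] = (1.0 + acc[w]) / (1.0 - none[w])
    return h[s]
```

As a sanity check, on the path 0-1-2 where both edges appear with probability 1/2 per step, each edge is waited on for 2 steps in expectation, so best_policy_expected_arrival(3, {frozenset((0, 1)): 0.5, frozenset((1, 2)): 0.5}, 0, 2) returns 4.0. The two nested O(n) loops give the O(n^2) running time quoted above; no such polynomial-time computation is available for MINIMUM ARRIVAL, which is #P-hard already for k = 0.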