Algorithms for reachability problems on stochastic Markov reward models

08/22/2021
by Irfan Muhammad, et al.

Probabilistic model checking is a field that seeks to automate the formal analysis of probabilistic models such as Markov chains. In this thesis, we study and develop the stochastic Markov reward model (sMRM), which extends the Markov chain with rewards as random variables. Because the model has only recently been introduced, few techniques and algorithms exist for its analysis. The purpose of this study is to derive such algorithms and make them both scalable and accurate. Additionally, we derive the theory necessary for probabilistic model checking of sMRMs against existing temporal logics such as PRCTL. We present the equations for computing first-passage reward densities, expected-value problems, and other reachability problems. Our focus, however, is on strictly numerical solutions for first-passage reward densities. We solve for these by first adapting known direct linear-algebra algorithms such as Gaussian elimination, and iterative methods such as the power method, Jacobi, and Gauss-Seidel. We provide solutions both for discrete-reward sMRMs, where all rewards are discrete (lattice) random variables, and for continuous-reward sMRMs, where all rewards are strictly continuous random variables but do not necessarily have continuous probability density functions (pdfs). Our solutions use the fast Fourier transform (FFT) for faster computation, and we adapt existing quadrature rules for convolution, such as the trapezoid rule, Simpson's rule, and Romberg's method, to obtain more accurate solutions.
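To illustrate the flavour of the iterative approach described above, the following is a minimal sketch, not the thesis' exact algorithm: a Jacobi-style fixed-point iteration, with FFT-based convolution, for the first-passage reward distribution of a tiny discrete-reward sMRM. The chain, the transition reward values, the truncation length `L`, and the iteration count are all illustrative assumptions.

```python
import numpy as np

L = 64                        # truncation length of the reward lattice (assumption)
states = [0, 1]               # transient states; state 2 is the target (absorbing)
P = np.array([[0.0, 0.6, 0.4],
              [0.3, 0.0, 0.7],
              [0.0, 0.0, 1.0]])  # transition probability matrix (assumed)

def delta(r):
    """Point-mass pmf at reward value r on the lattice 0..L-1."""
    p = np.zeros(L)
    p[r] = 1.0
    return p

# pmf[(s, t)] is the reward pmf attached to the transition s -> t (assumed rewards)
pmf = {(0, 1): delta(2), (0, 2): delta(5),
       (1, 0): delta(1), (1, 2): delta(3)}

def conv(a, b):
    """Convolution of two pmfs via FFT, truncated back to the lattice 0..L-1."""
    n = 2 * L
    c = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
    return c[:L]

# Jacobi iteration on the convolution equations:
#   f_s = P[s, target] * pmf[s, target] + sum_t P[s, t] * (pmf[s, t] * f_t)
# where * denotes convolution and f_s is the first-passage reward pmf from s.
f = {s: np.zeros(L) for s in states}
for _ in range(200):
    new_f = {}
    for s in states:
        acc = P[s, 2] * pmf[(s, 2)] if (s, 2) in pmf else np.zeros(L)
        for t in states:
            if (s, t) in pmf and P[s, t] > 0:
                acc = acc + P[s, t] * conv(pmf[(s, t)], f[t])
        new_f[s] = acc
    f = new_f

print("mass reaching the target from state 0:", f[0].sum())
print("expected first-passage reward from state 0:", (np.arange(L) * f[0]).sum())
```

The accumulated mass from state 0 approaches 1 as the iteration converges (any probability pushed past the truncation length L is lost, which is one reason the choice of L matters in practice), and the same loop structure carries over to Gauss-Seidel by updating `f[s]` in place.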
