Probabilistic model checking is a field that seeks to automate the formal analysis of probabilistic models such as Markov chains. In this thesis, we study and develop the stochastic Markov reward model (sMRM), which extends the Markov chain with rewards as random variables. As the model has only recently been introduced, few techniques and algorithms exist for its analysis. The purpose of this study is to derive such algorithms that are both scalable and accurate. Additionally, we derive the necessary theory for probabilistic model checking of sMRMs against existing temporal logics such as PRCTL. We present the equations for computing first-passage reward densities, expected-value problems, and other reachability problems.

Our focus, however, is on finding strictly numerical solutions for first-passage reward densities. We solve for these by adapting known direct linear algebra algorithms such as Gaussian elimination, and iterative methods such as the power method, Jacobi, and Gauss-Seidel. We provide solutions both for discrete-reward sMRMs, where all rewards are discrete (lattice) random variables, and for continuous-reward sMRMs, where all rewards are strictly continuous random variables, though not necessarily with continuous probability density functions (pdfs). Our solutions use the fast Fourier transform (FFT) for faster computation, and we adapt existing quadrature rules for convolution, such as the trapezoid rule, Simpson's rule, and Romberg's method, to obtain more accurate solutions.

In the discrete-reward setting, existing solutions are either derived by hand, or obtained by combining graph-reduction algorithms with symbolic solving via computer algebra systems. The symbolic approach is not scalable, and we therefore present strictly numerical but relatively more scalable algorithms. We found each class of algorithm, direct and iterative, capable of solving problems with larger state spaces. The best performer was the power method, owed partially to its simplicity, which allowed easier vectorization of its implementation. Whilst the Gauss-Seidel method was shown to converge in fewer iterations, it was slower due to the cost of deconvolution. The Gaussian elimination algorithm performed poorly relative to these.

In the continuous-reward setting, existing solutions can be adapted from the literature on semi-Markov processes. However, it appears that other algorithms should still be researched for the cases where rewards have discontinuous pdfs. The algorithm we have developed is able to resolve such cases, albeit the solution does not appear as scalable as in the discrete-reward setting.
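As a rough illustration of the flavour of these computations (and not the algorithm developed in this thesis), the following Python sketch computes a first-passage reward pmf for a discrete-reward sMRM by a power-method-style iteration over path lengths, combining edge-reward pmfs with FFT-based convolution. The transition matrix P, the reward pmfs w, the state target, and the function names are illustrative assumptions, and probability mass falling beyond the lattice grid is simply truncated.

    import numpy as np

    def fft_convolve(a, b, size):
        """Linear convolution of two pmfs via FFT, truncated to `size` lattice points."""
        n = 1 << (2 * size - 1).bit_length()          # FFT length >= length of full convolution
        c = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)
        return np.clip(c[:size], 0.0, None)           # clamp tiny negative round-off from the FFT

    def first_passage_reward_pmf(P, w, target, size, tol=1e-12, max_iter=10_000):
        """Power-method-style iteration for the first-passage reward pmf to `target`.

        P      : (n, n) transition matrix of the embedded Markov chain
        w      : (n, n, size) array; w[i][j] is the reward pmf attached to edge (i, j)
        target : index of the absorbing target state
        """
        n = P.shape[0]
        # u[i] holds the sub-pmf contributed by paths reaching `target` in exactly k steps from i
        u = np.zeros((n, size))
        for i in range(n):
            if i != target and P[i, target] > 0:
                u[i] = P[i, target] * w[i][target]    # one-step paths
        f = u.copy()                                  # running total over all path lengths
        for _ in range(max_iter):
            new_u = np.zeros_like(u)
            for i in range(n):
                if i == target:
                    continue
                for j in range(n):
                    if j != target and P[i, j] > 0:
                        new_u[i] += P[i, j] * fft_convolve(w[i][j], u[j], size)
            f += new_u
            if new_u.sum() < tol:                     # negligible probability mass added; stop
                break
            u = new_u
        return f

This sketch is deliberately naive: it recomputes every edge convolution at each iteration, whereas a practical implementation would exploit vectorization across states, which is one reason the power method performed well in our experiments.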
Acknowledgements

Much thanks goes to my supervisor, Professor David Parker, who opened the doors to academia for me. Thanks also go to him for his guidance and for reviewing my work. I also thank my RSMG group members, Professor Peter Hancox and Dr. Shan He, both of whom motivated me and trusted that I could accomplish this. Thanks are also due to Associate Professor Nick Hawes, and Professor ...