2004
DOI: 10.1016/j.laa.2003.12.018
Hypergraph-based parallel computation of passage time densities in large semi-Markov models

Abstract: Passage time densities and quantiles are important performance and quality of service metrics, but their numerical derivation is, in general, computationally expensive. We present an iterative algorithm for the calculation of passage time densities in semi-Markov models, along with a theoretical analysis and empirical measurement of its convergence behaviour. In order to implement the algorithm efficiently in parallel, we use hypergraph partitioning to minimise communication between processors and to balance w…

Cited by 26 publications (47 citation statements)
References 18 publications
“…Finally, in the example presented here several hundred successive matrix-vector multiplications are performed, but in other techniques for response time density calculation (e.g. [7]) many millions of such operations are performed, thus reducing the relative overhead of the hypergraph partitioning step.…”
Section: Discussion
confidence: 99%
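The statement above concerns the cost model behind hypergraph partitioning: for a row-wise distribution of a sparse matrix, the column-net hypergraph's "connectivity minus one" metric counts exactly the vector entries that must be communicated per matrix-vector multiplication. The following is a minimal, hypothetical sketch of that metric; the matrix, the partition, and the function name `comm_volume` are invented for illustration and do not come from the cited papers.

```python
# Column-net hypergraph model of a sparse matrix: each column is a net
# connecting the rows that hold a nonzero in it. For a row-wise partition,
# a net spanning k processors forces k - 1 values to be communicated per
# matrix-vector multiplication (the "connectivity - 1" metric).

def comm_volume(nonzeros, part):
    """nonzeros: iterable of (row, col) pairs; part: dict row -> processor id."""
    nets = {}
    for r, c in nonzeros:
        nets.setdefault(c, set()).add(part[r])
    return sum(len(procs) - 1 for procs in nets.values())

# 4x4 example: rows 0,1 on processor 0; rows 2,3 on processor 1.
nz = [(0, 0), (0, 2), (1, 1), (2, 2), (2, 3), (3, 0), (3, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(comm_volume(nz, part))  # → 2 (columns 0 and 2 each span both processors)
```

Because this cost is paid on every iteration, the one-off expense of partitioning is amortised more effectively the more matrix-vector products the method performs, which is the point the citing authors make.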
“…Having tested the iterative passage-time analysis technique on very small models, we demonstrate that it is scalable to large models by summarising analysis of a 15 million state SMP model of a distributed web server [5], shown in Fig. 5.…”
Section: A Distributed Web-server Cluster Model
confidence: 99%
“…7 shows the density of the time taken to perform 100 reads and 50 page updates in the web server model. Calculation of the 35 t-points plotted required 2 days, 17 hours and 30 minutes using 64 slave processors [5]. Our algorithm evaluated L i j (s) at 1155 s-points, each of which involved manipulating sparse matrices of rank 15,445,919.…”
Section: Passage-time Densities
confidence: 99%
“…This builds upon our iterative technique for generating passage time densities and quantiles [1,2]. The algorithm is based on the calculation and subsequent numerical inversion of Laplace transforms.…”
Section: Introduction
confidence: 99%
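The final statement describes obtaining a passage time density by numerically inverting its Laplace transform, which is why the transform must be evaluated at many s-points. As a concrete illustration, here is a sketch of one standard inversion scheme, the Abate–Whitt Euler algorithm; this is not necessarily the inverter used in [1,2], and the parameter defaults (A = 18.4, n = 15, m = 11) are conventional choices, not values from the cited work.

```python
import math

def euler_laplace_inversion(F, t, A=18.4, n=15, m=11):
    """Approximate f(t) from its Laplace transform F(s) via the Euler
    (Abate-Whitt) method: a trapezoidal Fourier-series approximation
    accelerated by binomial (Euler) averaging of partial sums."""
    def term(k):
        # k-th alternating term, evaluated on the Bromwich line Re(s) = A/(2t)
        s = complex(A / (2 * t), k * math.pi / t)
        return (-1) ** k * F(s).real

    partial = 0.5 * F(complex(A / (2 * t), 0)).real
    partial += sum(term(k) for k in range(1, n + 1))
    sums = [partial]
    for j in range(1, m + 1):
        partial += term(n + j)
        sums.append(partial)
    # Euler acceleration: binomially weighted average of the last m+1 sums
    avg = sum(math.comb(m, j) * sums[j] for j in range(m + 1)) / 2 ** m
    return math.exp(A / 2) / t * avg

# Sanity check with a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
approx = euler_laplace_inversion(lambda s: 1 / (s + 1), 1.0)
```

Each call to `F` here corresponds to one evaluation of the transform at an s-point; in the parallel setting described above, every such evaluation requires the iterative sparse linear algebra whose cost the hypergraph partitioning amortises.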