2002
DOI: 10.1007/s00362-002-0124-9

Is a small Monte Carlo analysis a good analysis?

Abstract: In this paper we study the relationship between the number of replications and the accuracy of the estimated quantiles of a distribution obtained by simulation. A method for testing hypotheses on the quantiles of a theoretical distribution using the simulated distribution is proposed, as well as a method to check the hypothesis of consistency of a test. Financial support from research projects PB96-1469-C05-01, UPV-038.321-G55/98 and PI9970 is gratefully acknowledged.
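As a rough illustration of the question the paper addresses (how the number of replications affects the accuracy of simulated quantiles), the following Python sketch estimates the 0.95 quantile of a standard normal distribution by simulation for several numbers of replications. The distribution, quantile level, and replication counts are arbitrary choices for illustration, not the authors' experimental setup.

import numpy as np

# Illustration only: estimate the 0.95 quantile of N(0, 1) by simulation
# and observe how the spread of the estimate shrinks as the number of
# Monte Carlo replications grows.
rng = np.random.default_rng(0)
true_q = 1.6449  # exact 0.95 quantile of N(0, 1), for reference

for n_reps in (100, 1_000, 10_000, 100_000):
    # Repeat the whole simulation 200 times to gauge the variability
    # of the estimated quantile around the true value.
    estimates = [np.quantile(rng.standard_normal(n_reps), 0.95)
                 for _ in range(200)]
    print(f"replications = {n_reps:>7d}: "
          f"mean estimate = {np.mean(estimates):.4f}, "
          f"std. dev. = {np.std(estimates):.4f} (true = {true_q})")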

Cited by 22 publications (11 citation statements); references 8 publications.
“…To this end, we keep l_c and n_ω fixed and equal to 4 and 10^4, respectively. The latter value is also similar to those used in the literature [7,44,51,58,69,99,118] and consistent with the theoretical results on the accuracy of MC sampling presented in [33]. Analogously to the previous subsection, we report the results obtained for different values of the weight coefficient w, which impacts the number of the independent variables z : Ω → R^{n_z} preserved after the model order reduction described in Appendix A.4.…”
Section: Computational Speed (supporting)
confidence: 89%
“…The major problem with sampling-based methods, however, is in sampling per se: one has to obtain a sufficiently large number of readings of the quantity of interest in order to accurately estimate the necessary statistics about this quantity, and the number of required samples can be considerable [33]. In the case of MC sampling, for instance, the error can be halved by quadrupling the number of sample points.
Section: Previous Work (mentioning)
confidence: 99%
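The square-root convergence mentioned in this excerpt is easy to check numerically. The sketch below is an illustration with an arbitrary estimator (E[X^2] for X ~ N(0, 1)), not taken from the cited work; it shows that quadrupling the number of sample points roughly halves the empirical standard error.

import numpy as np

# Empirical check of the 1/sqrt(n) Monte Carlo error rate: quadrupling
# the sample size should roughly halve the standard error.
rng = np.random.default_rng(1)

def mc_std_error(n_samples, n_trials=500):
    # Estimate E[X^2] = 1 for X ~ N(0, 1) and return the empirical
    # standard deviation of the estimator across independent trials.
    estimates = [np.mean(rng.standard_normal(n_samples) ** 2)
                 for _ in range(n_trials)]
    return np.std(estimates)

for n in (1_000, 4_000, 16_000):
    print(f"samples = {n:>6d}: empirical std. error = {mc_std_error(n):.4f}")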
“…Dynamic power profiles involved in the experiments are based on simulations of randomly generated applications defined as directed acyclic task graphs. The floorplans of the platforms are constructed in such a way that the processing elements form regular grids. The time step of power and temperature traces is set to 1 ms (see Section IV), which is also the time step of the recurrence in (7).…”
Section: Results (mentioning)
confidence: 99%
“…This means that, to get an additional decimal point of accuracy, one has to obtain a hundred times more samples. Each such sample implies a complete realization of the whole system, which renders MC-based methods slow and often infeasible since the needed number of simulations can be extremely large [9].…”
Section: Introduction (mentioning)
confidence: 99%
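This statement reflects the standard Monte Carlo error rate (a general result, not specific to the cited work), which in LaTeX notation reads

\varepsilon(N) \approx \frac{c}{\sqrt{N}}, \qquad\text{hence}\qquad \varepsilon(100N) \approx \frac{\varepsilon(N)}{10},

so gaining one extra decimal digit of accuracy (a tenfold error reduction) requires roughly a hundred times more samples; the constant c depends on the variance of the simulated quantity.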
“…The major problem with sampling techniques, however, is in sampling: one should be able to obtain sufficiently many realizations of the metric of interest in order to accurately estimate the needed statistics about that metric [3]. When the subject under analysis is expensive to evaluate, sampling methods are rendered slow and often infeasible.…”
Section: Introduction (mentioning)
confidence: 99%