2018
DOI: 10.1007/978-3-030-03421-4_22
Lightweight Statistical Model Checking in Nondeterministic Continuous Time

Abstract: Lightweight scheduler sampling brings statistical model checking to nondeterministic formalisms with undiscounted properties, in constant memory. Its direct application to continuous-time models is rendered ineffective by their dense concrete state spaces and the need to consider continuous input for optimal decisions. In this paper we describe the challenges and state of the art in applying lightweight scheduler sampling to three continuous-time formalisms: After a review of recent work on exploiting discrete…
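To make the abstract's core idea concrete: in lightweight scheduler sampling, a scheduler is identified by a single integer, and every nondeterministic choice along a simulation run is resolved by hashing that integer together with the history so far, so no scheduler ever needs to be stored explicitly. The sketch below illustrates this for a discrete MDP (the continuous-time case treated in the paper needs additional care); the model interface (`initial_state`, `enabled_actions`, `step`) and the check `property_holds` are hypothetical names for illustration, not part of any tool referenced on this page.

```python
import hashlib
import random

def lss_max_reachability(mdp, property_holds, n_schedulers=100, n_runs=1000,
                         max_steps=200, seed=0):
    """Minimal sketch of lightweight scheduler sampling on a discrete MDP.

    Assumes a hypothetical model interface: mdp.initial_state(),
    mdp.enabled_actions(state) -> list of actions, and
    mdp.step(state, action) -> successor sampled from that action's
    distribution. property_holds(state) checks the reachability target.
    """
    rng = random.Random(seed)
    best = 0.0
    for _ in range(n_schedulers):
        sigma = rng.getrandbits(32)  # a candidate scheduler is just an integer
        successes = 0
        for _ in range(n_runs):
            state = mdp.initial_state()
            trace = str(state)
            for _ in range(max_steps):
                if property_holds(state):
                    successes += 1
                    break
                actions = mdp.enabled_actions(state)
                # Resolve nondeterminism deterministically from (sigma, history):
                # hashing the trace gives history-dependent schedulers; hashing
                # only the current state would restrict to memoryless ones.
                h = hashlib.sha256(f"{sigma}|{trace}".encode()).digest()
                action = actions[int.from_bytes(h[:8], "big") % len(actions)]
                state = mdp.step(state, action)
                trace += f"|{state}"
        best = max(best, successes / n_runs)
    # The maximum over sampled schedulers is, up to statistical error, a lower
    # bound on the true maximal reachability probability.
    return best
```

Taking the minimum over sampled schedulers instead would analogously give an upper-bound estimate for minimal reachability; either way the guarantee is one-sided, which matches the "useful lower bounds if ε-schedulers are frequent" characterization quoted in the citation statements below.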

Cited by 20 publications (24 citation statements) | References 33 publications
“…In [HMZ+12], candidates for optimal strategies are generated and gradually improved, but "at any given point we cannot quantify how close to optimal the candidate scheduler is" (cited from [HMZ+12]) and the algorithm "does not in general converge to the true optimum" (cited from [LST14]). Further, [LST14,DLST15,DHS18] randomly sample compact representation of strategies, resulting in useful lower bounds if ε-schedulers are frequent. [HPS+19] gives a convergent model-free algorithm (with no bounds on the current error) and identifies that the previous [SKC+14] "has two faults, the second of which also affects approaches [...] [HAK18,HAK19]".…”
Section: Related Work
confidence: 99%
“…On the downside, we were unable to find counterexamples for some faulty variants and properties. This calls for future research, exploiting techniques to guide the simulation towards rare bugs/events [7,10,21] or towards uncovered variants relying, e.g., on distance-based sampling [22] or light-weight scheduling sampling [19]. Nevertheless, the positive outcome of our study is to show that SMC can act as a low-cost-high-reward alternative to exhaustive verification, which can provide thorough results in a majority of cases.…”
Section: Results
confidence: 88%
“…Finally, an avenue that avoids storing a (complete) model are simulation-based approaches (statistical model checking [2]) and variants of reinforcement learning, possibly with neural networks. For MDPs, these approaches yield weak statistical guarantees [20], but may provide good oracles.…”
Section: Statistical Methods and (Deep) Reinforcement Learning
confidence: 99%