Ninth International Conference on Quantitative Evaluation of Systems (QEST 2012)
DOI: 10.1109/qest.2012.19
Statistical Model Checking for Markov Decision Processes

Abstract: Statistical Model Checking (SMC) is a computationally very efficient verification technique based on selective system sampling. One well-identified shortcoming of SMC is that, unlike probabilistic model checking, it cannot be applied to systems featuring nondeterminism, such as Markov Decision Processes (MDPs). We address this limitation by developing an algorithm that resolves nondeterminism probabilistically, and then uses multiple rounds of sampling and Reinforcement Learning to provably improve res…
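The abstract outlines the core loop: a randomized scheduler resolves the MDP's nondeterminism, sampled runs are checked against the bounded property, and a reinforcement-style update biases the scheduler toward decisions seen on satisfying runs. Below is a minimal Python sketch of that loop under stated assumptions; the MDP interface (`mdp.actions`, `mdp.sample_path`) and property object (`phi.bound`, `phi.satisfied_by`) are illustrative names, not the paper's actual implementation or update rule.

```python
# Hedged sketch of the sample-and-reinforce loop described in the abstract.
# All interfaces are assumed for illustration, not taken from the paper.
import random
from collections import defaultdict

def smc_mdp(mdp, phi, rounds=20, samples_per_round=1000, lr=0.5):
    # weights[s][a]: unnormalized probability of choosing action a in state s
    weights = defaultdict(lambda: defaultdict(lambda: 1.0))

    def choose(state):
        # Roulette-wheel selection: resolve nondeterminism probabilistically.
        acts = mdp.actions(state)
        total = sum(weights[state][a] for a in acts)
        r, acc = random.uniform(0, total), 0.0
        for a in acts:
            acc += weights[state][a]
            if r <= acc:
                return a
        return acts[-1]

    estimate = 0.0
    for _ in range(rounds):
        good = 0
        for _ in range(samples_per_round):
            # A path is assumed to be a list of (state, action) pairs,
            # truncated at the property's time bound.
            path = mdp.sample_path(choose, horizon=phi.bound)
            if phi.satisfied_by(path):
                good += 1
                # Reinforce the decisions taken on a satisfying path.
                for s, a in path:
                    weights[s][a] *= (1.0 + lr)
        estimate = good / samples_per_round
    return estimate
```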

Cited by 85 publications (106 citation statements)
References 24 publications
“…In contrast, we show that the statistical approach from [20] does not always converge (see the example in the next section).…”
Section: Introduction (mentioning)
confidence: 78%
“…Both the learning-based and the baseline approximate algorithms significantly improve upon a state-of-the-art statistical model checking algorithm, originally developed for MDPs [20]. That algorithm also uses sampling and reinforcement learning, but it needs to sample multiple (possibly many) times along the same path to obtain a good estimate of the quality function used for reinforcement [37].…”
Section: Introduction (mentioning)
confidence: 99%
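The repeated-sampling caveat quoted above is easy to see in code: estimating the quality (Q) value of a single state-action decision by simulation alone takes many rollouts through the same prefix before the estimate stabilizes. The sketch below is illustrative only; `mdp.rollout` and the `phi` interface are assumed names, not APIs from [20] or [37].

```python
# Illustrative Monte Carlo estimate of a state-action quality value.
# Many samples (n) along the same prefix are needed for a reliable estimate,
# which is exactly the cost the citing paper points out.
def estimate_q(mdp, prefix, action, phi, n=500):
    """Estimate the probability that extending `prefix` with `action`
    (then following the current scheduler) satisfies phi."""
    hits = 0
    for _ in range(n):
        path = mdp.rollout(prefix, first_action=action, horizon=phi.bound)
        hits += phi.satisfied_by(path)
    return hits / n
```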
“…For the former, some SMC-like approaches have recently been developed. They either work by iteratively optimising the decisions of an explicitly-stored scheduler [4,9], or by sampling from the scheduler space and iteratively improving a set of candidate near-optimal schedulers [5]. The former are heavyweight techniques because the size of the description of the (memoryless) scheduler is significant, and in the worst case is the size of the state space.…”
Section: Introduction (mentioning)
confidence: 99%
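The "sampling from the scheduler space" alternative in the quote above can be made concrete: in the lightweight style attributed to [5], a memoryless scheduler need not be stored explicitly, because its action in each state can be derived by hashing a small integer seed together with the state. The following hedged sketch samples candidate seeds and keeps the best-performing one; every name in it is illustrative, and the real technique in [5] refines a whole set of candidates rather than a single best seed.

```python
# Hedged sketch of lightweight scheduler sampling: a scheduler is identified
# by an integer seed, so no per-state action table has to be stored.
import hashlib
import random

def hashed_choice(seed, state, actions):
    # Deterministically derive the scheduler's action from (seed, state).
    h = hashlib.sha256(f"{seed}:{state}".encode()).digest()
    return actions[int.from_bytes(h[:4], "big") % len(actions)]

def sample_schedulers(mdp, phi, n_seeds=1000, samples=200):
    best_seed, best_p = None, -1.0
    for _ in range(n_seeds):
        seed = random.getrandbits(32)
        hits = 0
        for _ in range(samples):
            path = mdp.sample_path(
                lambda s: hashed_choice(seed, s, mdp.actions(s)),
                horizon=phi.bound)
            hits += phi.satisfied_by(path)
        p = hits / samples
        if p > best_p:
            best_seed, best_p = seed, p
    return best_seed, best_p
```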
“…Existing results in temporal logic-constrained verification and control synthesis with unknown systems are mainly in two categories: The first uses statistical model checking and hypothesis testing for Markov chains [6] and MDPs [7]. The second applies inference algorithms to identify the unknown factors and adapt the controller with the inferred model (a probabilistic automaton, or a two-player deterministic game) of the system and its environment [8,9].…”
Section: Introduction (mentioning)
confidence: 99%
“…The second applies inference algorithms to identify the unknown factors and adapt the controller with the inferred model (a probabilistic automaton, or a two-player deterministic game) of the system and its environment [8,9]. Statistical model checking for MDPs [7] relies on sampling of the trajectories of Markov chains induced from the underlying MDP and policies to verify whether the probability of satisfying a bounded linear temporal logic constraint is greater than some quantity for all admissible policies. It is restricted to bounded linear temporal logic properties in order to make the sampling and checking for paths computationally feasible.…”
Section: Introduction (mentioning)
confidence: 99%
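The quoted description of [7] boils down to hypothesis testing over simulated traces: fix a policy, simulate the induced Markov chain, check the bounded property on each finite trace, and decide P(phi) > theta sequentially. A minimal sketch using Wald's sequential probability ratio test (SPRT), a standard decision procedure in statistical model checking, is given below; the indifference region delta, the error bounds alpha/beta, and the `sample_trace` callback (returning True when a sampled trace satisfies phi) are all illustrative assumptions.

```python
# Minimal SPRT sketch for deciding whether P(phi) exceeds a threshold theta,
# testing H0: p >= theta + delta against H1: p <= theta - delta.
import math

def sprt(sample_trace, theta, delta=0.01, alpha=0.05, beta=0.05):
    """Return True iff H0 (the satisfaction probability exceeds the
    threshold) is accepted; requires 0 < theta - delta < theta + delta < 1."""
    p0, p1 = theta + delta, theta - delta
    log_a = math.log((1 - beta) / alpha)    # cross above: accept H1 (p <= p1)
    log_b = math.log(beta / (1 - alpha))    # cross below: accept H0 (p >= p0)
    llr = 0.0                               # log likelihood ratio of H1 vs H0
    while log_b < llr < log_a:
        if sample_trace():                  # sampled trace satisfies phi
            llr += math.log(p1 / p0)        # success: evidence toward H0
        else:
            llr += math.log((1 - p1) / (1 - p0))  # failure: evidence toward H1
    return llr <= log_b
```

As the quote notes, restricting to time-bounded properties is what makes this feasible: each call to `sample_trace` terminates after finitely many steps, so every sample can be checked in bounded time.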