2018
DOI: 10.1007/978-3-319-89963-3_19

Multi-cost Bounded Reachability in MDP

Abstract: We provide an efficient algorithm for multi-objective model-checking problems on Markov decision processes (MDPs) with multiple cost structures. The key problem at hand is to check whether there exists a scheduler for a given MDP such that all objectives over cost vectors are fulfilled. Reachability and expected cost objectives are covered and can be mixed. Empirical evaluation shows the algorithm's scalability. We discuss the need for output beyond Pareto curves and exploit the available information …
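To make the reachability case concrete: the question is whether some scheduler can make the probability of reaching a goal state, while every accumulated cost stays within its bound, meet a given threshold. The Python sketch below answers such a query for a made-up toy MDP by value iteration over the naively unfolded space of (state, remaining budgets) pairs; the model, budgets, and threshold are invented for illustration, and this brute-force unfolding is precisely the blow-up the paper's algorithm is designed to avoid, not the paper's method.

```python
from itertools import product

# Toy MDP with two cost structures (illustrative only):
# state -> {action: [(probability, successor, (cost1, cost2)), ...]}
MDP = {
    "s0":   {"a": [(0.5, "s1", (1, 0)), (0.5, "s2", (0, 1))],
             "b": [(1.0, "s1", (2, 2))]},
    "s1":   {"a": [(1.0, "goal", (1, 1))]},
    "s2":   {"a": [(1.0, "goal", (3, 0))]},
    "goal": {"a": [(1.0, "goal", (0, 0))]},
}

def max_bounded_reach(mdp, goal, budgets, iterations=100):
    """Pmax of reaching `goal` while each accumulated cost stays within its
    budget, via value iteration on the unfolded (state, remaining) space."""
    space = [(s, rem) for s in mdp
             for rem in product(*(range(b + 1) for b in budgets))]
    value = {p: 1.0 if p[0] == goal else 0.0 for p in space}
    for _ in range(iterations):      # a fixed horizon is enough for the toy model
        new = {}
        for s, rem in space:
            if s == goal:
                new[(s, rem)] = 1.0
                continue
            best = 0.0
            for branches in mdp[s].values():            # maximise over actions
                val = 0.0
                for prob, succ, cost in branches:
                    nxt = tuple(r - c for r, c in zip(rem, cost))
                    if all(r >= 0 for r in nxt):        # bound not exceeded
                        val += prob * value[(succ, nxt)]
                best = max(best, val)
            new[(s, rem)] = best
        value = new
    return value

# Does some scheduler reach "goal" with cost1 <= 4 and cost2 <= 2
# at probability >= 0.9?
values = max_bounded_reach(MDP, "goal", (4, 2))
print(values[("s0", (4, 2))] >= 0.9)
```

The unfolded table has |S| * (B1 + 1) * ... * (Bm + 1) entries, which is why avoiding the explicit unfolding matters once bounds grow or more cost structures are added.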

Cited by 24 publications (25 citation statements). References 47 publications.
“…Further properties include temporal logic formulas based on PCTL [66] and CSL [8,11], conditional probability and cost queries [13,14], long-run average values [6,28,44] (also known as steady-state or mean payoff values), cost-bounded properties [69] (see Sect. 4.2), and support for multi-objective queries [51,109] (see Sect.…”
Section: Properties
Citation type: mentioning, confidence: 99%
“…For MDP, step- and reward-bounded reachability probabilities can be converted to total reward objectives by unfolding the current amount of steps (or rewards) into the state-space of the model. Approaches that avoid such an expensive unfolding have been presented in [28] for objectives with step-bounds and in [34,35] for objectives with one or multiple reward-bounds. Time-bounded reachability probabilities for MA have been considered in [47].…”
Section: Combining Long-run Average and Total Rewards
Citation type: mentioning, confidence: 99%
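As a rough illustration of the unfolding mentioned in this statement (a hypothetical toy encoding, not Storm's or the paper's data structures): a single cost bound can be compiled away by tracking the remaining budget inside each state, after which the bounded objective becomes ordinary unbounded reachability. The factor-(bound + 1) blow-up per cost structure is what the cited unfolding-free approaches avoid.

```python
def unfold(mdp, bound):
    """Unfold one integer cost bound into the state space.
    mdp: state -> {action: [(prob, successor, cost)]} with a single cost per
    transition.  Returns an MDP over (state, remaining_budget) pairs; runs that
    would overshoot the budget move to an absorbing sink, so reaching (goal, r)
    for any r in the result equals reaching goal with accumulated cost <= bound
    from (initial_state, bound) in the original."""
    sink = ("exceeded", None)
    unfolded = {sink: {"loop": [(1.0, sink, 0)]}}
    for state, actions in mdp.items():
        for rem in range(bound + 1):
            unfolded[(state, rem)] = {
                action: [(prob, (succ, rem - cost) if cost <= rem else sink, cost)
                         for prob, succ, cost in branches]
                for action, branches in actions.items()
            }
    return unfolded

# Example: a two-state chain where every step costs 1, unfolded for bound 3.
chain = {"s": {"go": [(1.0, "t", 1)]}, "t": {"stay": [(1.0, "t", 1)]}}
print(len(unfold(chain, 3)))   # 2 states * 4 budget values + sink = 9
```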
“…Surveys on multi-objective decision making in AI and machine learning can be found in [51] and [58], respectively. This article is an extended version of a previous conference paper [32]. We provide more details on the core algorithms, extended proofs, an expanded explanation of our visualisations, and additional models in the experimental evaluation.…”
Section: Related Work
Citation type: mentioning, confidence: 99%