2019
DOI: 10.14736/kyb-2019-1-0114

First passage risk probability optimality for continuous time Markov decision processes



Cited by 2 publications (5 citation statements)
References 24 publications
“…The existence of optimal policies here is guaranteed by using the non-explosion of the controlled state process (see Assumption 1 in our paper), while the existence of optimal policies is guaranteed by using the non-explosion of the controlled state process and the properties of the target set B (see Assumption 3.2 and 3.6 in [15]). (iii) According to different policies, the probability space and the optimality equation in our paper are different from those developed in [15].…”
Section: Introduction (mentioning)
confidence: 99%
“…The results of this process can then be used to measure the risk of a stochastic system (economic and financial systems). Inspired by this situation, risk probability criteria have garnered significant attention and have been widely studied by [1,2,6,10,13,15,26,28,29,31] for Markov decision processes (for short MDPs).…”
Section: Introduction (mentioning)
confidence: 99%
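
For orientation, a rough sketch of the first-passage risk probability criterion as it is commonly formulated in the CTMDP literature; the cost rate c, target set B, first passage time \tau_B, and risk level \lambda below are illustrative assumptions, not notation taken from this report:

% First passage time to the target set B of the controlled process \xi_t
\tau_B = \inf\{\, t \ge 0 : \xi_t \in B \,\},
% Risk probability under policy \pi, initial state x, and risk level \lambda
F^{\pi}(x,\lambda) = \mathbb{P}^{\pi}_{x}\!\left( \int_{0}^{\tau_B} c(\xi_t, a_t)\, dt > \lambda \right).

Under this formulation the aim is to find a policy \pi^{*} attaining \inf_{\pi} F^{\pi}(x,\lambda) for all states x and levels \lambda; the non-explosion condition mentioned in the citation statements above is what keeps the controlled process \xi_t well defined for all t \ge 0.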