1969
DOI: 10.1137/0307027

Continuous Time Markovian Sequential Control Processes

Abstract: Consider a stochastic system with a finite state space and a finite action space. Between actions, the waiting time to transition is a random variable with a continuous distribution function depending only on the current state and the action taken. There are positive costs of taking actions and the system earns at a rate depending upon the state of the system and the action taken. We allow actions to be taken between transitions. A policy for which there is a positive probability of an action between transitio…
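To make the model described in the abstract concrete, here is a minimal simulation sketch. Every numeric ingredient below (the sojourn-time distributions, action costs, earning rates, and transition probabilities) is a hypothetical placeholder, and for simplicity the sketch re-chooses the action only at transition epochs, whereas the paper also allows actions to be taken between transitions.

```python
# Minimal, hypothetical sketch of the controlled semi-Markov model sketched in
# the abstract: finite states, finite actions, a continuous sojourn-time
# distribution and an earning rate that both depend on (state, action), plus a
# positive cost each time an action is taken.  All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 3, 2
action_cost   = np.array([[1.0, 2.0], [1.5, 0.5], [2.0, 1.0]])  # cost of taking action a in state s
earn_rate     = np.array([[3.0, 4.0], [1.0, 2.5], [0.5, 5.0]])  # earning rate in state s under action a
sojourn_scale = np.array([[1.0, 0.5], [2.0, 1.0], [0.7, 1.3]])  # mean of an exponential sojourn time
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions)) # transition probabilities P[s, a, s']

def simulate(policy, horizon=100.0, s0=0):
    """Total earnings minus action costs up to time `horizon` under a stationary policy."""
    t, s, total = 0.0, s0, 0.0
    while t < horizon:
        a = policy[s]
        total -= action_cost[s, a]                  # positive cost of taking the action
        tau = rng.exponential(sojourn_scale[s, a])  # continuous waiting time to the next transition
        dt = min(tau, horizon - t)
        total += earn_rate[s, a] * dt               # earn at a state/action-dependent rate
        t += tau
        if t < horizon:
            s = rng.choice(n_states, p=P[s, a])     # jump to the next state
    return total

stationary_policy = np.array([1, 0, 1])             # an arbitrary pure stationary policy
print(simulate(stationary_policy))
```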

Cited by 16 publications (4 citation statements)
References 6 publications
Citing statements published: 1974–2016
“…Concerning the optimal control of semi-Markov processes, the case of a finite number of states has been studied in [5], [16], [18], [24], while the case of an arbitrary state space is considered in [26] and [28]. As in [5] and in [28], in our formulation we admit control actions that can depend not only on the state process but also on the length of time the process has remained in that state. The approach based on BSDEs is classical in the diffusive context and is also present in the literature in the case of BSDEs with jumps; see for instance [23].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…Finally, BSDEs driven by a random measure related to a pure jump process have been recently studied in [6], and in [7] the pure jump Markov case is considered. Our backward equation (1.2) is driven by a random measure associated to a two-dimensional Markov process (X, a), and its compensator is a stochastic random measure with a non-dominated intensity as in [7]. Even if the associated process is not pure jump, the existence, uniqueness and continuous dependence on the data for the BSDE (1.2) can be deduced by extending in a straightforward way the results in [7]. Concerning the optimal control of semi-Markov processes, the case of a finite number of states has been studied in [5], [16], [18], [24], while the case of an arbitrary state space is considered in [26] and [28]. As in [5] and in [28], in our formulation we admit control actions that can depend not only on the state process but also on the length of time the process has remained in that state.…”
Citation type: mentioning
confidence: 99%
“…A semi-Markov process, as studied in the literature on decision processes, may be described as a right-continuous jump process in which the times between successive jumps are not necessarily exponentially distributed, and after a jump the conditional distribution of the future given the past depends only on the current state. Various authors [1], [8], [12], [13], [14] have derived necessary and sufficient conditions for the optimality of a policy in controlling such processes. Ross [12], [13] restricted attention to policies in which actions can be changed only at jump epochs.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
“…Ross [12], [13] restricted attention to policies in which actions can be changed only at jump epochs. Chitgopekar [1] and Stone [14] allowed the policies to change the action between jumps. However, in all these papers the state is assumed to remain constant between jumps, thus excluding many interesting problems from consideration.…”
Section: Introduction
Citation type: mentioning
confidence: 99%
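For orientation, the semi-Markov structure described in the excerpts above is usually formalized through a semi-Markov kernel. The generic textbook formulation below is included only as background; the notation is illustrative and does not reproduce the paper's own symbols.

```latex
% Generic (textbook) semi-Markov kernel; notation is illustrative, not the paper's.
% Jump times 0 = T_0 < T_1 < T_2 < \dots with post-jump states X_n = X_{T_n}:
\[
  Q(x, B, t) \;=\; \mathbb{P}\bigl( X_{n+1} \in B,\ T_{n+1} - T_n \le t \,\big|\, X_n = x \bigr),
\]
% so the sojourn time in state x has distribution F(t \mid x) = Q(x, E, t),
% which need not be exponential, and after a jump the conditional law of the
% future depends on the past only through the current state x.
```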