2016
DOI: 10.1080/17442508.2016.1197925
Optimal impulsive control of piecewise deterministic Markov processes

Abstract: In this paper, we study the infinite-horizon expected discounted continuous-time optimal control problem for Piecewise Deterministic Markov Processes (PDMPs) with both impulsive and gradual (also called continuous) controls. The set of admissible control strategies is assumed to consist of possibly randomized policies that may depend on the past history of the process. We assume that the gradual control acts on the jump intensity and on the transition measure, but not on the flow. The so-called Hamilton-Jacobi-Bellman…
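As a rough illustration of the criterion described in the abstract, the infinite-horizon expected discounted cost under a strategy mixing gradual and impulsive actions can be sketched as follows; the notation (discount rate $\alpha$, running cost $f$, impulse cost $c$, intervention times $\tau_n$, impulses $\Delta_n$) is assumed here for readability and is not taken from the paper.

\[
V(x,\pi) \;=\; \mathbb{E}^{\pi}_{x}\!\left[ \int_{0}^{\infty} e^{-\alpha t}\, f(X_t, u_t)\, dt \;+\; \sum_{n \ge 1} e^{-\alpha \tau_n}\, c\bigl(X_{\tau_n^-}, \Delta_n\bigr) \right]
\]

The control problem is then to minimize $V(x,\pi)$ over the admissible strategies $\pi$ described in the abstract.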

Cited by 21 publications (25 citation statements)
References 13 publications
“…On the other hand, if there is no drift and the trajectories are piecewise constant, the model is called a continuous-time Markov decision process (CTMDP). Impulse control means the following: at particular discrete time moments, the decision maker decides to intervene by instantaneously moving the process to some new point in the state space; that new point may also be random in the cases of CTMDP and PDMP. Then, restarting at this new point, the process runs until the next intervention, and so on.…”
Section: Introduction (mentioning)
confidence: 99%
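A minimal way to formalize the intervention mechanism sketched in this excerpt, for the PDMP case, is the following; the symbols ($\tau_n$ for intervention times, $a_n$ for impulsive actions, $Q$ for the stochastic kernel of the post-intervention state, $\phi$ for the deterministic flow) are assumed for illustration and are not taken from the cited works.

\[
X_{\tau_n} \sim Q\bigl(\cdot \mid X_{\tau_n^-},\, a_n\bigr), \qquad X_t = \phi\bigl(X_{\tau_n},\, t - \tau_n\bigr) \quad \text{for } \tau_n \le t < \tau_{n+1}
\]

Here the process is instantaneously relocated (possibly at random, through $Q$) at each intervention time and then follows the flow $\phi$ until the next jump or intervention; in the CTMDP case the flow degenerates to $\phi(x,s) = x$, which is why the trajectories are piecewise constant.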
“…Sometimes, such control is called “singular control.” The goal is to minimize the total (expected) accumulated cost, which may be discounted or not. A popular method of attack for such problems is dynamic programming. In other works, versions of the Pontryagin maximum principle are used.…”
Section: Introduction (mentioning)
confidence: 99%
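For the discounted impulse control problem mentioned here, the dynamic programming approach typically leads to a relation of the following form; this is a generic sketch with assumed notation ($\mathcal{K}$ for the continuation operator, $\mathcal{M}$ for the intervention operator, $c$ for the impulse cost, $Q$ for the post-intervention kernel), not the specific equation of the paper under review.

\[
V(x) \;=\; \min\bigl\{ \mathcal{K}V(x),\; \mathcal{M}V(x) \bigr\}, \qquad \mathcal{M}V(x) \;=\; \inf_{a}\Bigl[ c(x,a) + \int V(y)\, Q(dy \mid x, a) \Bigr]
\]

Here $\mathcal{K}V(x)$ is the value of continuing without an immediate impulse (following the flow and the gradual control until the next decision epoch), while $\mathcal{M}V(x)$ is the value of intervening at once; the value function equals the smaller of the two at every state.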