A numerical approach to joint continuous and impulsive control of Markov chains
Published: 2018
DOI: 10.1016/j.ifacol.2018.11.428

Cited by 8 publications (8 citation statements). References 16 publications.
“…On the basis of stability analysis, the optimality property of the impulsive controller is a significant factor to be considered among the existing literature. Miller et al [25] solved the optimal impulsive control problems of finite-state Markov chains in continuous time, based on the dynamic programming algorithm and by solving the quasivariational inequality.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
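For orientation, a quasi-variational inequality of the kind referenced here can be written, for a finite-state chain with controlled generator Q(u), running cost g, impulse cost c and terminal cost h, in generic textbook notation (not necessarily the exact formulation of [25]) as

\[
\min\Big\{ \partial_t v(t,x) + \min_{u \in U}\big[(Q(u)\,v(t,\cdot))(x) + g(x,u)\big],\;
\min_{y \neq x}\big[c(x,y) + v(t,y)\big] - v(t,x) \Big\} = 0,
\qquad v(T,x) = h(x),
\]

where the first branch is the dynamic-programming (HJB) operator for the gradual control and the second branch enforces optimality with respect to impulsive jumps; at every (t, x) at least one of the two branches is active.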
“…However, there exist several defects in the traditional optimal impulsive control methods. First, for impulsive stochastic systems, the traditional state transition probability matrix P involved in [25]-[28] requires that the action time of the impulsive controller be fixed at the current time k for each system state x ∈ X. Similarly, the multistep transition matrix P used by the traditional methods requires that, for each initial state x(k), the impulsive controller be applied to the system from time k over a strictly fixed time span.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
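As a small numerical illustration of the fixed-time-span restriction described above (the chain, the reset map and the times below are toy assumptions of ours, not objects taken from [25]-[28]): with a regular time-based transition matrix, the distribution can only be pushed forward in prefixed steps, so the impulse has to be scheduled at a fixed time k.

```python
import numpy as np

# Toy 3-state chain: regular (time-based) one-step transition matrix P.
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Impulse map R: row i is the distribution right after an impulse is
# applied while the chain is in state i (deterministic reset to state 0 here).
R = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])

mu = np.array([0.0, 0.0, 1.0])   # initial distribution
k, ell = 4, 3                    # impulse prefixed at time k, horizon k + ell

# Time-based propagation: the impulse time k is fixed in advance, and the
# multistep matrix P^ell covers a span of exactly ell steps after it.
mu_before = mu @ np.linalg.matrix_power(P, k)                # just before the impulse
mu_after = (mu_before @ R) @ np.linalg.matrix_power(P, ell)  # impulse, then ell more steps
print(mu_after, mu_after.sum())
```

Any change in when the impulse fires would require recomputing the matrix powers for a different split of the horizon, which is exactly the rigidity the citing authors point out.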
“…Consequently, the regular time-based transition matrices (the one-step matrix P and its multistep counterpart) can be used to derive the evolution of the probability distribution at the prefixed time sequence, but not the evolution of the probability distribution across the impulsive actions. In summary, these facts indicate that the restrictions of the regular time-based transition matrices and the variable impulsive control cycles of the impulsive controller are incompatible and conflict with each other, which makes the traditional impulsive control methods [24], [25], [26], [27] complicated and highly specialized, with low generality and uniformity. Noting that the "arrival of the impulsive action" can be treated as an event, to address the above issues a novel general event-based impulsive transition matrix is needed to represent how the probability distribution evolves across the impulsive actions.…”
Citation type: mentioning (confidence: 99%)
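In contrast, here is a minimal Monte-Carlo sketch of the event-based idea quoted above (the trigger rule, reset kernel and names are our own illustrative assumptions, not the construction proposed by the citing authors): the matrix is indexed by the event "the next impulsive action has just occurred", so it stays well defined even though the impulse cycle length is random and state-dependent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-state chain (same one-step matrix as in the previous sketch).
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# Impulse fires as soon as the chain enters a trigger state, so the cycle
# length is random and state-dependent rather than prefixed.
trigger = {1, 2}

# Reset kernel: row x is the distribution right after an impulse applied in state x.
R = np.array([[1.0, 0.0, 0.0],
              [0.7, 0.3, 0.0],
              [0.2, 0.8, 0.0]])

def state_after_next_impulse(i, max_steps=1000):
    """Run the chain from state i until the impulse event fires; return the post-impulse state."""
    x = i
    for _ in range(max_steps):
        x = rng.choice(3, p=P[x])
        if x in trigger:
            return rng.choice(3, p=R[x])
    return x

# Event-based impulsive transition matrix G:
# G[i, j] = P(state is j just after the next impulse | state i just after the previous one).
n_samples = 10_000
G = np.zeros((3, 3))
for i in range(3):
    for _ in range(n_samples):
        G[i, state_after_next_impulse(i)] += 1
G /= n_samples
print(np.round(G, 3))   # rows sum to 1; no fixed time span is assumed anywhere
```

Because G is conditioned on the impulse event rather than on a step count, the same matrix describes the distribution across impulses regardless of when they fire.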
“…Dufour et al [24] analyze the Hamilton-Jacobi-Bellman (HJB) equation associated with optimal impulsive control problems for piecewise deterministic Markov processes (PDMPs). Miller et al [25] develop the martingale representation of stochastic systems subject to joint impulsive and gradual controls, and construct the optimal strategy based on the DP equation. Basu and Stettner [26] provide optimal impulsive controller design schemes for zero-sum games under several weak assumptions and weak Feller conditions.…”
Citation type: mentioning (confidence: 99%)