2007
DOI: 10.1007/s00245-007-9025-6

Characterizations of Overtaking Optimality for Controlled Diffusion Processes


Cited by 71 publications (32 citation statements: 0 supporting, 32 mentioning, 0 contrasting).
References 30 publications.
“…Hence, we have two trivial cases: if h₀ is strictly increasing (h₀′ > 0) or decreasing (h₀′ < 0), then the control policy fₐ(x) ≡ a or, respectively, f₀(x) ≡ 0 is the unique policy that attains the maximum in (6.2); in other words, F₋₁ is the singleton {fₐ} or {f₀}. This set coincides with the set of average optimal policies, according to Theorem 3.3 of [18] (see also [2], [4], and [10]). Now suppose that h₀ attains a maximum, say at x₀.…”
Section: Blackwell Optimality for Controlled Diffusions (mentioning)
confidence: 68%
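The case analysis in this excerpt is easiest to see when the maximand in (6.2) is linear in the control. The display below is only a sketch under that assumption; equation (6.2) itself is not reproduced on this page, and the control set A = [0, a] and the coefficient h₀′(x) are inferred from the constant policies fₐ and f₀ named in the quote.

% Sketch only: assumes the maximand in (6.2) is linear in the control u,
% with coefficient h_0'(x), over an assumed control set A = [0, a].
\[
\max_{u \in [0,a]} u\, h_0'(x) =
\begin{cases}
a\, h_0'(x), & \text{attained only at } u = a \text{ if } h_0'(x) > 0,\\
0,           & \text{attained only at } u = 0 \text{ if } h_0'(x) < 0,
\end{cases}
\]

so a strictly monotone h₀ pins the maximizer down to one of the two constant policies, which is why F₋₁ is a singleton in these trivial cases.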
“…In Section 2 we introduce the control system and our main assumptions. In addition, we define the optimality criteria we are concerned with, and we summarize some known results on the Hamilton-Jacobi-Bellman (HJB) equation [2], [4], [8], [10], [17], [18], which is essentially our point of departure to analyze m-discount optimality and Blackwell optimality. In Section 3 we express the expected α-discounted v-reward (see (3.6)) for some function v as a Laurent series (see (3.11)).…”
Section: dx(t) = b(x(t), u(t)) dt + σ(x(t)) db(t) for all t ≥ 0 and … (mentioning)
confidence: 99%
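The Laurent series mentioned in this excerpt is the standard device of sensitive-discount analysis. Since (3.6) and (3.11) are not reproduced on this page, the display below shows only the generic form such expansions take; the coefficients h_m and the identification of the leading term with the average reward g(f) are assumptions, not quotes from the paper.

% Generic form of the Laurent expansion of the alpha-discounted reward of a
% stationary policy f around alpha = 0; the notation h_m and g(f) is assumed.
\[
V_\alpha(f, x) \;=\; \sum_{m=-1}^{\infty} \alpha^{m}\, h_m(f, x)
\;=\; \frac{g(f)}{\alpha} \;+\; h_0(f, x) \;+\; \alpha\, h_1(f, x) \;+\; \cdots
\]

In this framework, m-discount optimality amounts to a lexicographic comparison of the first m + 2 coefficients, and Blackwell optimality to m-discount optimality for every m ≥ −1.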
“…For continuous-time models, however, just a few references deal with this issue. For instance, Puterman [18] studied controlled diffusions on compact intervals and Jasso-Fuentes and Hernández-Lerma [12] considered general controlled diffusions. Regarding jump processes with nonfinite state space, Prieto-Rumeau and Hernández-Lerma [16] analyzed the case of a denumerable state space.…”
Section: Introduction (mentioning)
confidence: 99%
“…[11], [17], and [23]), the bias and the overtaking optimality criteria (that choose an average optimal policy with the maximal expected reward growth as the time horizon goes to ∞; see, e.g., [7], [8], [10, p. 132], [12], [16], and [19, Chapter 10]), and the so-called discount-sensitive criteria (which choose policies that are asymptotically optimal as the discount rate converges to 0; see [7], [13], [15], [19, Chapter 10], and [22]), among others.…”
Section: Introduction (mentioning)
confidence: 99%
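For orientation, the overtaking criterion named in this excerpt (and in the title of the indexed article) is usually stated as follows. This is the standard textbook formulation, with J_T(x, π) denoting the expected total reward over [0, T] under policy π from initial state x; the notation is assumed here, not taken from the quoted sources.

% Standard definition of overtaking optimality; J_T(x, pi) is the expected
% total reward on [0, T] under policy pi from state x (notation assumed).
\[
\pi^{\ast} \text{ is overtaking optimal if }\;
\liminf_{T \to \infty} \bigl[ J_T(x, \pi^{\ast}) - J_T(x, \pi) \bigr] \;\geq\; 0
\;\text{ for all policies } \pi \text{ and all states } x.
\]

Overtaking optimality thus refines average optimality: among average optimal policies it singles out those whose finite-horizon reward is eventually not overtaken by any competitor.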