2000
DOI: 10.1214/aoap/1019487513

Discrete-review policies for scheduling stochastic networks: trajectory tracking and fluid-scale asymptotic optimality

Abstract: This paper describes a general approach for dynamic control of stochastic networks based on fluid model analysis, where, in broad terms, the stochastic network is approximated by its fluid analog, an associated fluid control problem is solved and, finally, a scheduling rule for the original system is defined by interpreting the fluid control policy. The main contribution of this paper is to propose a general mechanism for translating the solution of the fluid optimal control problem into an implementable discret…
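The translation step the abstract alludes to can be made concrete with a toy example. The sketch below is our illustration, not the paper's actual algorithm: a two-class single-server queue is reviewed every ell time units, and the server's time allocation for the next planning period comes from a small linear program derived from the fluid model. All names and parameters (lam, mu, c, ell, plan_period) are assumptions for illustration; the full mechanism in the paper also involves nominal trajectories and safety stocks, omitted here.

```python
# Minimal sketch of a discrete-review scheduling policy in the spirit
# of the paper (hypothetical parameters; nominal plans, safety stocks,
# and trajectory tracking from the actual mechanism are omitted).
import numpy as np
from scipy.optimize import linprog

lam = np.array([0.3, 0.3])   # arrival rates (assumed)
mu = np.array([2.0, 1.0])    # service rates (assumed)
c = np.array([3.0, 1.0])     # holding cost per job in queue (assumed)
ell = 5.0                    # review-period length (assumed)

def plan_period(x):
    """Given queue lengths x at a review epoch, choose time allocations
    t[i] for the single server over the next period of length ell,
    minimizing the expected end-of-period fluid holding cost."""
    # End-of-period fluid queue: x + lam*ell - mu*t, kept nonnegative.
    # Minimizing c . (x + lam*ell - mu*t) over t is the same as
    # maximizing (c*mu) . t, so the LP favors high c*mu classes.
    obj = -(c * mu)                       # linprog minimizes, so negate
    A_ub = np.vstack([np.ones(2),         # t1 + t2 <= ell (server budget)
                      np.diag(mu)])       # mu_i * t_i <= x_i + lam_i * ell
    b_ub = np.concatenate([[ell], x + lam * ell])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x

print(plan_period(np.array([10.0, 4.0])))  # serves the high c*mu class first
```

In this toy setting the LP reproduces the familiar c-mu priority ordering; the value of the discrete-review formulation is that the same planning step extends to networks where no simple priority rule is optimal.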

Cited by 93 publications (60 citation statements)
References 35 publications
“…(see also Meyn (2000)). In Bäuerle (2000) and Maglaras (1998) it has been shown that a (non-stationary) asymptotically optimal policy can always be constructed if V_F(x) < ∞. We will now show that the set of asymptotically optimal policies contains the average cost optimal policies, which are solutions of the average cost optimality equation.…”
Section: Asymptotic Optimality
Mentioning confidence: 98%
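For readers outside this literature, the notion referenced in this excerpt can be stated in one common form; the notation below is assumed for illustration, not quoted from the cited papers.

```latex
% Fluid value function: cheapest total holding cost needed to drain
% the fluid model from initial state x (notation assumed).
\[
  V_F(x) \;=\; \inf_{q(\cdot)\,:\,q(0)=x} \int_0^\infty c \cdot q(t)\,\mathrm{d}t .
\]
% A policy \pi is called (fluid-scale) asymptotically optimal if the
% fluid-scaled expected cost attains this value from large initial states:
\[
  \lim_{n\to\infty} \frac{1}{n^{2}}\,
  \mathbb{E}^{\pi}_{\lfloor nx \rfloor}\!\left[ \int_0^\infty c \cdot Q(t)\,\mathrm{d}t \right]
  \;=\; V_F(x) \qquad \text{for all } x,
\]
% which is meaningful precisely when V_F(x) < \infty, the condition
% appearing in the excerpt.
```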
“…However, it has been shown by Maglaras (1998) and Bäuerle (2000a) that asymptotically optimal policies can always be constructed. The construction in Maglaras (1998) is such that the state of the network is reviewed at discrete time points and the actions which have to be carried out over the next planning period are computed from a linear program.…”
Section: Introduction
Mentioning confidence: 99%
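Continuing the hypothetical sketch given after the abstract, the review-and-plan construction described in this excerpt (observe the state at discrete review points, compute the next period's actions from a linear program, execute) might look like the loop below; it reuses lam, mu, c, ell and plan_period from that sketch, all of which are assumptions.

```python
# Continues the earlier sketch (same assumed lam, mu, c, ell, plan_period).
# A deterministic (fluid) review loop: observe queues, solve the LP,
# apply the resulting allocation for one planning period.
x = np.array([10.0, 4.0])
for k in range(4):
    t = plan_period(x)                               # planning LP
    x = np.maximum(x + lam * ell - mu * t, 0.0)      # fluid state update
    print(f"review {k}: allocation={t.round(2)}, queues={x.round(2)}")
```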
“…However, we do not have any result on the suboptimality gap. In the literature, asymptotic fluid optimality results have been obtained for various dynamic scheduling problems in queueing models; see for example [28,9,21,27,31]. More precisely, it is shown that when applying the optimal control resulting from the fluid model to the stochastic model, the fluid-scaled cost converges to the optimal cost of the fluid control model, the latter being in fact a provable lower bound on the stochastic cost.…”
Section: Optimal Control Comparison of Stochastic Model with Fluid Model
Mentioning confidence: 99%
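The convergence described in this excerpt is usually argued in two steps, sketched below with the (assumed) notation from the earlier display: the fluid optimum bounds every policy from below, so a fluid-derived policy that attains the bound in the limit is asymptotically optimal.

```latex
% Lower bound: no admissible policy \pi can beat the optimal fluid
% cost at fluid scale (notation assumed, as in the earlier display).
\[
  \liminf_{n\to\infty} \frac{1}{n^{2}}\,
  \mathbb{E}^{\pi}_{\lfloor nx \rfloor}\!\left[ \int_0^\infty c \cdot Q(t)\,\mathrm{d}t \right]
  \;\ge\; V_F(x),
\]
% so asymptotic optimality of a fluid-based policy amounts to showing
% that it achieves equality in this limit.
```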
“…See for example [6] where this is shown for the cµ-rule in a multi-class single-server queue and [10] where this is shown for Klimov's rule in a multi-class queue with feedback. For other cases, researchers have aimed at establishing that the fluid control is asymptotically optimal, that is, the fluid-based control is optimal for the stochastic optimization problem after a suitable scaling, see for example [28,9,21,27,31]. We conclude by mentioning that the fluid approach owes its popularity to the groundbreaking result stating that if the fluid model drains in finite time, the stochastic process is stable, see [18,30].…”
Section: Introduction
Mentioning confidence: 99%
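The "groundbreaking result" mentioned at the end of this excerpt is commonly stated in the following form, paraphrased here with assumed notation rather than quoted from [18,30].

```latex
% Fluid stability criterion (paraphrase, notation assumed): if there
% exists T < \infty such that every fluid limit q(.) of the network
% with |q(0)| = 1 satisfies
\[
  q(t) = 0 \qquad \text{for all } t \ge T,
\]
% then the underlying stochastic network, operating under the same
% policy, is positive Harris recurrent (stable).
```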