2011 23rd Euromicro Conference on Real-Time Systems (ECRTS)
DOI: 10.1109/ecrts.2011.30
Scalable Utility Aware Scheduling Heuristics for Real-time Tasks with Stochastic Non-preemptive Execution Intervals

Cited by 4 publications (3 citation statements)
References 20 publications
“…(For related problems in this area, cf. Manolache, Eles, and Peng, 2001; Mills and Anderson, 2011; Tidwell, Bass, Lasker, Wylde, Gill, and Smart, 2011.) It should be emphasized that although the computation of the optimal ES policy for 16 jobs takes about 2 hours, the execution of the found ES policy only requires milliseconds of computation time and can therefore also be integrated into a real-time system with very restrictive time limits.…”
Section: Results for Instances with N = 16 (mentioning)
confidence: 98%
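The "hours offline, milliseconds online" split this statement emphasizes is the usual tabular-policy pattern: the optimization enumerates states once and stores the best action per state, and the run-time dispatcher only performs a lookup. The following is a minimal, self-contained Python sketch of that pattern; the toy workload, state encoding, and all names are illustrative assumptions, not the cited authors' implementation.

```python
# Sketch of the offline/online split: building the policy table is expensive
# and done once; executing it at run time is a single dictionary lookup.
# Job data and all names here are illustrative, not the authors' code.
from itertools import permutations

# Toy workload: job id -> (duration, deadline, utility if finished on time).
JOBS = {0: (2, 4, 5.0), 1: (1, 3, 3.0), 2: (3, 9, 4.0), 3: (2, 7, 2.0)}

def accrued_utility(start, order):
    """Utility gained by running `order` non-preemptively from time `start`."""
    t, total = start, 0.0
    for j in order:
        dur, deadline, util = JOBS[j]
        t += dur
        if t <= deadline:
            total += util
    return total

def compute_policy():
    """Offline phase (brute force here; an hours-long optimization in the
    cited work): map each (remaining jobs, elapsed time) state to the best
    job to dispatch next."""
    policy = {}
    for perm in permutations(JOBS):
        t = 0
        for i, j in enumerate(perm):
            rest = perm[i:]
            best = max(permutations(rest),
                       key=lambda order: accrued_utility(t, order))
            policy[(frozenset(rest), t)] = best[0]
            t += JOBS[j][0]
    return policy

def dispatch(policy, remaining, elapsed):
    """Online phase: a constant-time lookup, hence the milliseconds-scale
    run-time cost noted in the citation."""
    return policy[(frozenset(remaining), elapsed)]

policy = compute_policy()                 # slow, run once offline
print(dispatch(policy, {0, 1, 2, 3}, 0))  # fast, run at every dispatch point
```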
“…RT scheduling is probabilistic if it applies laws of probability to select tasks or to compute timing parameters (Tidwell, 2011). Some probabilistic RT scheduling algorithms model the problem using queueing theory.…”
Section: Real-Time Scheduling (mentioning)
confidence: 99%
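To make the term concrete, here is a small Python sketch of one probabilistic selection rule: execution times are random variables rather than fixed WCETs, and the scheduler picks the ready task whose utility, weighted by its estimated on-time probability, is highest. The task set and sampling rule are assumptions for illustration only, not taken from the cited papers.

```python
# Probabilistic task selection: deadline-miss probability is estimated by
# Monte Carlo sampling from each task's execution-time distribution.
import random

def p_meets_deadline(now, deadline, duration_samples):
    """Estimate P(task finishes by its deadline) from sampled durations."""
    hits = sum(1 for d in duration_samples if now + d <= deadline)
    return hits / len(duration_samples)

def pick_task(now, tasks):
    """Choose the ready task maximizing utility * P(on-time completion)."""
    def expected_utility(task):
        name, deadline, utility, sample_duration = task
        samples = [sample_duration() for _ in range(1000)]
        return utility * p_meets_deadline(now, deadline, samples)
    return max(tasks, key=expected_utility)

# Each task: (name, deadline, utility, execution-time sampler).
tasks = [
    ("sensor", 5.0, 3.0, lambda: random.uniform(1.0, 4.0)),
    ("logger", 9.0, 2.0, lambda: random.expovariate(1.0)),
]
print(pick_task(0.0, tasks)[0])
```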
“…In the context of his PhD thesis, Tidwell (2011) defined a Markov Decision Process (MDP) model that enables value-optimal schedulers to be derived, and that also provides a formal framework for comparing the performance of different scheduling policies. He likewise showed how the problem structure allows the number of states in the MDP to be bounded by wrapping states into a finite number of exemplar states.…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
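The state-wrapping idea can be sketched as a canonicalization function: raw MDP states that differ only in absolute time (or in how late an already-missed job is) map to the same exemplar state, which is what bounds the state count. The feature encoding below is a hedged, illustrative assumption, not the exact construction from the thesis.

```python
# Wrapping raw states into exemplars: absolute time is re-expressed as
# time-to-deadline per job, and negative slack is clipped, since once a job
# has missed its deadline the exact lateness no longer affects future utility.

def exemplar(absolute_time, jobs):
    """Map a raw state (absolute time, remaining jobs) to a canonical
    exemplar keyed only on per-job slack, clipped at -1.

    jobs: iterable of (job_id, duration, absolute_deadline).
    """
    features = []
    for job_id, duration, deadline in sorted(jobs):
        slack = deadline - absolute_time
        features.append((job_id, duration, max(slack, -1)))
    return tuple(features)

# Two raw states far apart in absolute time wrap to the same exemplar:
s1 = exemplar(100, [(0, 2, 103), (1, 1, 90)])
s2 = exemplar(500, [(0, 2, 503), (1, 1, 490)])
assert s1 == s2
print(s1)
```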