2014 IEEE 19th Real-Time and Embedded Technology and Applications Symposium (RTAS) 2014
DOI: 10.1109/rtas.2014.6925992
Hiding memory latency using fixed priority scheduling

Cited by 28 publications (19 citation statements)
References 13 publications
“…The aim is to enable more efficient operation whereby the memory phase of one task overlaps with the execution phase of another. Yao et al (2012) presented a TDMA scheduling algorithm for PREM tasks on a multicore, and Wasly and Pellizzoni (2014) provided schedulability analysis for nonpreemptable PREM tasks on a partitioned multicore. Lampka et al (2014) proposed a formal approach for bounding the worst-case response time of concurrently executing real-time tasks under resource contention and almost arbitrarily complex resource arbitration policies, with a focus on main memory as a shared resource.…”
Section: Related Work Assuming Different Application Models
confidence: 99%
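The overlap described above can be made concrete with a small model. The sketch below is an illustrative simulation, not the cited papers' algorithms: each PREM-style task is a hypothetical pair (memory phase M, execution phase C), DMA transfers are serialized, and a task may execute only after its own transfer completes and the CPU is free. It compares a fully serial schedule against one where the next task's memory phase hides behind the current task's execution phase.

```python
# Illustrative model (assumption: simplified PREM-style tasks, not the
# paper's actual scheduling algorithm). Each task is (M, C): a memory
# (DMA) phase M followed by an execution phase C.

def serial_makespan(tasks):
    """No overlap: every memory and execution phase runs back-to-back."""
    return sum(m + c for m, c in tasks)

def overlapped_makespan(tasks):
    """DMA and CPU pipelined: while the CPU executes task i, the DMA
    engine already transfers task i+1's data."""
    dma_free = 0.0  # time the DMA engine becomes idle
    cpu_free = 0.0  # time the CPU becomes idle
    for m, c in tasks:
        dma_free += m                        # transfers are serialized
        start_exec = max(dma_free, cpu_free) # wait for data and CPU
        cpu_free = start_exec + c
    return cpu_free

# Three identical tasks: 2 time units of memory, 4 of execution.
tasks = [(2, 4), (2, 4), (2, 4)]
print(serial_makespan(tasks))      # -> 18
print(overlapped_makespan(tasks))  # -> 14 (two memory phases hidden)
```

With memory-bound workloads (M close to C) the pipeline saving grows toward the total transfer time of all but the first task, which is precisely the latency these works aim to hide.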
“…A new memory-buffer chip called Centaur, which provides up to 128 MB of embedded DRAM buffer cache per processor along with an improved DRAM scheduler, was proposed in [15]. In [17], a dynamic scheduling algorithm was proposed for a set of sporadic real-time tasks that efficiently co-schedules a processor and a DMA engine to hide memory transfer latency.…”
Section: Previous Work
confidence: 99%
“…In addition, a previous work [6] has proposed a system with a DMA peripheral that overlaps memory transfers with CPU computation in order to hide memory latency. A partitioned non-preemptive scheduling of cores and DMA has been introduced in [13]. Although the partitioned approach showed good results in terms of hiding access latency to main memory, a partitioned system is not always preferable, as we mentioned earlier.…”
Section: A Memory and CPU Co-scheduling
confidence: 99%
“…We applied the overlapping mechanism of previous work [13] to fixed-priority scheduling on a single core, although the work can be extended to partitioned multicore scheduling using TDMA arbitration in main memory. While this work showed good results in terms of hiding access latency to main memory, a partitioned system is not always preferable, as it requires design-time decisions to statically assign tasks to cores.…”
Section: Introduction
confidence: 99%