2014 International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS XIV)
DOI: 10.1109/samos.2014.6893201

Extended performance analysis of the time predictable on-demand coherent data cache for multi- and many-core systems

Cited by 8 publications (6 citation statements). References 19 publications.
“…Software approaches for cache coherence require changing the application to handle the different copies of shared data explicitly. For instance, [55] modified the application to protect accesses to shared data by using lock mechanisms such that only one core at any time has access to the shared data. In the worst-case, this approach performs as well as the sequential execution of tasks sharing data.…”
Section: Software Solutions
confidence: 99%
“…main memory, are slow and may increase the WCET. On-Demand Coherent Cache [30] converts tasks accessing shared data to critical sections, hence disallowing concurrent execution of any tasks that share data. SWEL [31] focuses on high performance computing and message passing workloads.…”
Section: Related Work
confidence: 99%
“…2) Since private data does not cause any coherence interference, uncache-shared allows caching of only private data, while uncaching all shared data. In Figure 8, uncache-shared has better performance than uncache-all for all applications, with a geometric mean slowdown of 2.11×. Nonetheless, uncache-shared requires additional hardware and software modifications to distinguish and track cache lines with shared data, which are the same modifications required by [10]. 3) Mapping applications with shared data to the same core avoids data incoherence since these tasks share the same private cache.…”
Section: Exp2: Comparing Performance With Conventional Protocols
confidence: 99%
“…However, this solution requires special hardware performance counters and modifications to currently available scheduling techniques. A third solution suggests modifying the applications by marking instructions with shared data as critical sections such that they are accessed by only a single core at any time instance [10]. Although this allows caching of shared data, it stalls all tasks but one from accessing the data, which in the worst case (WC) amounts to sequentially running the tasks.…”
Section: Introduction
confidence: 99%