2009 30th IEEE Real-Time Systems Symposium 2009
DOI: 10.1109/rtss.2009.32

Timing Analysis of Concurrent Programs Running on Shared Cache Multi-Cores

Abstract: Memory accesses form an important source of timing unpredictability. Timing analysis of real-time embedded software thus requires bounding the time for memory accesses. Multiprocessing, a popular approach for performance enhancement, opens up the opportunity for concurrent execution. However, due to contention for any shared memory by different processing cores, memory access behavior becomes more unpredictable, and hence harder to analyze. In this paper, we develop a timing analysis method for concurr…

Cited by 104 publications (55 citation statements)
References 19 publications
“…In [20], a dual-core processor with a shared L2 cache model is considered. In [13], task lifetime information is computed and utilized to refine possible interferences. In [7], a method for identifying and bypassing the static single usage memory blocks so as to reduce the number of interferences is proposed.…”
Section: Discussion
confidence: 99%
“…all accesses are hits, and (ii) data references from different threads will not interfere with each other in the shared L2 cache. Li et al (2009a) proposed a method to estimate the worst-case response time of concurrent programs running on multicores with shared L2 caches, assuming set-associative instruction caches using the LRU replacement policy. Their work was later extended by Chattopadhyay et al (2010) by adding a TDMA bus analysis technique to bound the memory access delay.…”
Section: Related Work With a Focus On Shared Caches And Scratchpads
confidence: 99%
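The statement above assumes a set-associative instruction cache with LRU replacement. As a minimal illustrative sketch (not taken from the cited paper; cache geometry and the trace are invented for demonstration), the following simulates one such cache and shows how blocks that map to the same set evict each other, which is what a WCET analysis must classify as hits or misses:

```python
# Minimal sketch of a set-associative cache with LRU replacement.
# All parameters (4 sets, 2-way) are illustrative assumptions.

class LRUCache:
    def __init__(self, num_sets=4, associativity=2):
        self.num_sets = num_sets
        self.assoc = associativity
        # Each set is an ordered list: index 0 = most recently used line.
        self.sets = [[] for _ in range(num_sets)]

    def access(self, block):
        """Return 'hit' or 'miss' for a memory block address."""
        s = self.sets[block % self.num_sets]
        if block in s:
            s.remove(block)
            s.insert(0, block)   # promote to most recently used
            return "hit"
        if len(s) == self.assoc:
            s.pop()              # evict the least recently used line
        s.insert(0, block)
        return "miss"

cache = LRUCache()
trace = [0, 4, 0, 8, 4, 0]       # blocks 0, 4, 8 all map to set 0
results = [cache.access(b) for b in trace]
print(results)   # ['miss', 'miss', 'hit', 'miss', 'miss', 'miss']
```

Once a third conflicting block (8) enters the 2-way set, the reuse of blocks 0 and 4 no longer hits; a shared L2 makes this worse because the conflicting blocks can come from another core.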
“…Among them, the approach proposed by Li et al (2009b) analyzes the worst-case cache access scenario of parallelized applications modeled by Message Sequence Graphs. The approach suffers from very high time complexity and assumes that the cache access behaviors are known and finite.…”
Section: Related Work Assuming Different Application Models
confidence: 99%
“…One solution to this predicament might be to use an analysis approach that, for all the tasks that may run in parallel, statically studies, on a fine-grained abstract model of the processor, the accesses they may make to shared hardware resources and how they might contend with one another [18], [16]. This technique may yield ETB that are considerably tighter than what we can arrive at, but at the cost of a much more onerous and complex effort, which trades time composability for tightness, since its results apply only to a given task configuration and are invalidated if that configuration changes.…”
Section: Related Work
confidence: 99%
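The contention the excerpts describe can be made concrete with a toy model (my own illustration, not from the cited works): a task whose trace fits a shared direct-mapped cache is all hits after warm-up when run alone, but interleaving a co-runner whose blocks map to the same sets converts those hits into misses. This is the inter-core interference that the fine-grained static analyses above try to bound:

```python
# Toy shared direct-mapped cache (4 sets, parameters are illustrative).
# Demonstrates inter-core interference: co-runner blocks that map to the
# same sets evict the task's lines between its reuses.

def count_misses(trace, num_sets=4):
    """Simulate a direct-mapped cache and return the total miss count."""
    lines = [None] * num_sets
    misses = 0
    for block in trace:
        idx = block % num_sets
        if lines[idx] != block:
            misses += 1          # conflict or cold miss: fetch the block
            lines[idx] = block
    return misses

task_alone = [0, 1, 0, 1, 0, 1]            # 2 cold misses, then all hits
# Co-runner blocks 4 and 5 conflict with 0 and 1 in sets 0 and 1,
# so in the interleaved trace every single access misses.
interleaved = [0, 4, 1, 5, 0, 4, 1, 5, 0, 1]
print(count_misses(task_alone), count_misses(interleaved))   # 2 10
```

Per-core analysis alone would classify the task's reuses as hits; a shared-cache analysis must account for the co-runner's evictions, which is why the results are tied to one specific task configuration.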