2013 8th IEEE International Symposium on Industrial Embedded Systems (SIES)
DOI: 10.1109/sies.2013.6601483
Towards transparent parallel/distributed support for real-time embedded applications

Cited by 3 publications (4 citation statements)
References 5 publications
“…The PDRT library distributes the workload among a set of distributed nodes. It implements a for loop with parallel/distributed real-time behaviour like the one described in (Garibay-Martínez et al., 2013a). The PDRT distributes the load in a for loop in a similar way to the example in Figure 6.6; that is, it evenly distributes the iterations of the for loop among the nodes in the system (the DST algorithm is not used).…”
Section: Comparing Experimental Results and Simulation Results
confidence: 99%
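
The excerpt describes an even split of loop iterations across the nodes. As a minimal sketch of such a split (not the actual PDRT code; the function and variable names here are hypothetical), the per-node iteration range can be computed as follows:

#include <stdio.h>

/* Hypothetical sketch: evenly split the iterations [0, total) of a for
 * loop across `nodes` distributed nodes, as the excerpt describes.
 * Nodes with index < remainder receive one extra iteration. */
static void split_iterations(int total, int nodes, int node,
                             int *first, int *last)
{
    int base = total / nodes;   /* iterations every node gets */
    int rem  = total % nodes;   /* leftover iterations        */
    *first = node * base + (node < rem ? node : rem);
    *last  = *first + base + (node < rem ? 1 : 0);  /* exclusive bound */
}

int main(void)
{
    int first, last;
    /* Example: 100 iterations over 3 nodes -> [0,34), [34,67), [67,100) */
    for (int node = 0; node < 3; node++) {
        split_iterations(100, 3, node, &first, &last);
        printf("node %d: iterations [%d, %d)\n", node, first, last);
    }
    return 0;
}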
“…Therefore, the MPI code is not seen by the programmer (the MPI code is implicitly called by the OpenMP library). The programmer only needs to specify which OpenMP code blocks to distribute by using the #pragma omp distributedParallel pragma, and to specify their deadlines (Garibay-Martínez et al., 2013a). This is illustrated in Algorithm 3.4: the for loop can be distributed among 3 threads, and the computation must be completed before a deadline of 200 milliseconds.…”
Section: Supporting Parallel and Distributed Real-time Execution With…
confidence: 99%
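
The excerpt summarizes Algorithm 3.4 of the citing work without reproducing the listing. A hedged reconstruction of what such an annotated loop might look like is shown below; only the pragma name distributedParallel is taken from the text, while the clause spellings (num_threads, deadline) are assumptions based on the description, not a confirmed API:

#include <stddef.h>

#define N 1024
double a[N], b[N], c[N];

void vector_add(void)
{
    /* Sketch based on the excerpt: the loop is distributed among 3
     * threads/nodes and must finish within a 200 ms deadline.  The
     * clause names below are assumptions; the cited library is
     * described as generating the underlying MPI calls implicitly. */
    #pragma omp distributedParallel for num_threads(3) deadline(200)
    for (size_t i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}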
“…To this end, we are investigating more refined models such as “limited-preemptive” scheduling solutions, which reduce the cache-related overhead without affecting the overall schedulability [37,36], as well as more dynamic techniques which try to balance those same effects against the load (e.g. work-stealing approaches [35,23,43]).…”
Section: The Task Dependency Graph
confidence: 99%
“…Some of the most popular programming models implementing the fork-join structure are the OpenMP programming model [7] and the Message Passing Interface (MPI) model [8]. However, neither of these programming models provides timing guarantees, although some efforts to bridge that gap have been presented in [9,10].…”
Section: Introduction
confidence: 99%
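
As a brief illustration of the fork-join structure these models share, here is a standard OpenMP example (generic textbook code, not code from the cited paper):

#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Standard OpenMP fork-join: the master thread forks a team of
     * threads, the loop iterations are split among them, and all
     * threads join at the implicit barrier ending the region. */
    long sum = 0;

    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < 1000; i++)
        sum += i;

    /* After the join, only the master thread continues. */
    printf("sum = %ld\n", sum);
    return 0;
}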