[1993] Proceedings Seventh International Parallel Processing Symposium
DOI: 10.1109/ipps.1993.262808
The data-parallel Ada run-time system, simulation and empirical results

Cited by 8 publications (6 citation statements)
References 3 publications
“…The non-recursive subprogram case is perhaps the simplest, since there is only one call involved and thus there is no need for a parallelism strategy such as work-sharing, work-seeking, or work-stealing 8 , nor is there a need for reduction. In addition, there are no specific restrictions on the parameter profile of the subprogram, and the compiler writer can implement the calls without the need for library support 9 .…”
Section: Non-recursive Subprograms
“…Parallel programming in Ada was considered several years ago ( [8,9,10]). Mayer and Jahnichen [8] introduce a parallel keyword, which applies to for loops, allowing a specific compiler to optimize loop iterations, targeted to a multiprocessor platform.…”
Section: Introduction
“…[13,14,15]) 2 . The work in [13] introduces a parallel keyword, for for loops, allowing a specific compiler to optimize loop iterations, targeted to a multiprocessor platform. The work in [14] is similar, as it also targets the optimization of parallel loops; furthermore, the authors state that Ada tasks are not the appropriate unit of parallelization thus proposing a concept of minitasks which can be optimized by compilers and runtimes aware of this model.…”
Section: Parallel Programming and Ada
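The excerpts above describe compiler-managed loop-iteration parallelism (a `parallel` keyword for `for` loops) without showing code. As a rough illustration only — in Python rather than Ada, with a hypothetical `parallel_for` helper not drawn from any cited work — the idea of splitting a loop's iterations across workers might be sketched like this:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_for(n, body, workers=4):
    """Run body(i) for each i in range(n), dividing iterations into chunks.

    Illustrative sketch of loop-iteration parallelism; the cited proposals
    leave this partitioning to the compiler and runtime, not the programmer.
    """
    chunk = (n + workers - 1) // workers  # iterations per worker (rounded up)

    def run_chunk(start):
        for i in range(start, min(start + chunk, n)):
            body(i)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each worker executes one contiguous chunk of the iteration space.
        list(pool.map(run_chunk, range(0, n, chunk)))

# Example: square each element of an array, iterations run in parallel.
data = list(range(8))
out = [0] * 8

def body(i):
    out[i] = data[i] * data[i]

parallel_for(8, body)
# out == [0, 1, 4, 9, 16, 25, 36, 49]
```

Each iteration here writes a distinct index of `out`, so no reduction or synchronization between iterations is needed — mirroring the simple case the first excerpt describes.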