1999
DOI: 10.1002/(sici)1096-9128(199910)11:12<701::aid-cpe443>3.0.co;2-p
Engineering parallel symbolic programs in GPH

Cited by 28 publications (15 citation statements) · References 47 publications
“…We follow guidelines established in [31,Sec 3] with some flexibility. The approach we use is as follows:…”
Section: Parallelisation Methodology
confidence: 96%
“…Here, the programmer does not need to adapt the program to different computational Grid architectures and only needs to structure the sumTotient function appropriately and add the architecture-neutral evaluation strategy at the last line of the function. A thorough account of how one can engineer efficient parallel programs in GPH is given in [39]. The cost of providing the programmer with such a high-level abstraction is that GPH requires an elaborate RTE to dynamically manage parallel execution on complex architectures, and these are described next.…”
Section: Glasgow Parallel Haskell (GPH)
confidence: 99%
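The passage above describes adding an architecture-neutral evaluation strategy as the last line of the sumTotient function. A minimal sketch of that idiom, using GHC's `parallel` package (`Control.Parallel.Strategies`); the naive totient definition and the chunk size of 100 are illustrative assumptions, not the cited paper's exact code:

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Euler's totient: how many of 1..n are coprime to n (naive definition,
-- chosen here for clarity rather than speed).
totient :: Int -> Int
totient n = length [k | k <- [1 .. n], gcd n k == 1]

-- Sum of totients over an interval. The algorithm is ordinary sequential
-- Haskell; the evaluation strategy added with `using` on the last line is
-- the only parallel annotation, matching the style the quote describes.
sumTotient :: Int -> Int -> Int
sumTotient lower upper =
  sum (map totient [lower .. upper] `using` parListChunk 100 rdeepseq)

main :: IO ()
main = print (sumTotient 1 10)
```

Because the strategy only directs evaluation order, deleting the `` `using` … `` clause leaves the same result computed sequentially, which is what makes the program architecture-neutral.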
“…The GUM implementation of GPH delivers good performance for a range of parallel benchmark applications on a variety of parallel architectures, including shared-memory and distributed-memory architectures [39]. GUM's performance is also comparable with other mature parallel functional languages [7].…”
Section: GUM Performance
confidence: 99%
“…The GUM implementation delivers good performance for a range of parallel benchmark applications written using the GPH parallel dialect of Haskell. These benchmarks have been tested on a variety of parallel architectures, including shared and distributed-memory architectures [26], and a wide-area computational Grid [3].…”
Section: Figure 3, Load Distribution in GUM
confidence: 99%