2014 International Conference on High Performance Computing & Simulation (HPCS)
DOI: 10.1109/hpcsim.2014.6903682

Development effort and performance trade-off in high-level parallel programming

Cited by 5 publications (3 citation statements)
References 24 publications
“…Halstead metrics [39] and similar metrics are well-designed to evaluate the effort needed to write the same program in different ways. They have already been used by Légaux et al [40], and we plan to use such metrics to provide a comparison of PySke with other parallel programming libraries.…”
Section: Discussion
confidence: 99%
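
The quoted statement refers to the standard Halstead measures (vocabulary, length, volume, difficulty, effort). The sketch below only shows how those measures are computed from operator/operand counts; the counts themselves are hypothetical and this is not the counting methodology used by Légaux et al.

```python
import math

def halstead_effort(n1, n2, N1, N2):
    """Standard Halstead measures from operator/operand counts.

    n1: distinct operators   N1: total operator occurrences
    n2: distinct operands    N2: total operand occurrences
    """
    vocabulary = n1 + n2
    length = N1 + N2
    volume = length * math.log2(vocabulary)   # program "size" in bits
    difficulty = (n1 / 2) * (N2 / n2)         # proxy for how hard the code is to write
    effort = difficulty * volume              # Halstead development effort
    return volume, difficulty, effort

# Hypothetical counts for two versions of the same program,
# e.g. a hand-written parallel version vs. a skeleton-based one.
for label, counts in [("hand-written", (20, 35, 120, 180)),
                      ("skeleton-based", (12, 18, 45, 60))]:
    volume, difficulty, effort = halstead_effort(*counts)
    print(f"{label:>14}: V={volume:8.1f}  D={difficulty:5.1f}  E={effort:9.1f}")
```

A lower effort value for the skeleton-based version would support the development-effort side of the trade-off studied in the cited paper.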
“…PySke is related to SkeTo [13], [14], OSL [15], [16], and Muesli [17], [18], [19]. All these libraries are C++ libraries.…”
Section: Methods
confidence: 99%
“…get_partition makes the distribution of the lists visible in the structure itself. For example, get_partition on the list of type PList in Figure 2 yields the PList [[0, 2, 4], [6, 8], [10, 12], [14, 16]] (global view). Now each processor contains only one element (local size 1), but this element is a list.…”
Section: Distribution Changing Skeletons
confidence: 99%
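
The excerpt describes get_partition as turning a distributed list of elements into a distributed list of per-processor blocks. The sketch below is a sequential model of that semantics on plain Python lists; it does not use the real PySke PList API, and the function name and block-distribution rule are assumptions chosen to reproduce the quoted example.

```python
def get_partition_seq(global_view, nprocs):
    """Sequential model of a get_partition-style operation (hypothetical):
    split the global view into the per-processor blocks of a block
    distribution, so that each processor ends up holding a single
    element -- the list it previously stored locally."""
    q, r = divmod(len(global_view), nprocs)
    blocks, start = [], 0
    for p in range(nprocs):
        size = q + (1 if p < r else 0)   # first r processors get one extra element
        blocks.append(global_view[start:start + size])
        start += size
    return blocks

data = list(range(0, 18, 2))        # global view: [0, 2, ..., 16] over 4 processors
print(get_partition_seq(data, 4))   # [[0, 2, 4], [6, 8], [10, 12], [14, 16]]
```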