Proceedings of the 1993 ACM/IEEE Conference on Supercomputing (Supercomputing '93), 1993
DOI: 10.1145/169627.169802

Implementing a parallel C++ runtime system for scalable parallel systems

Cited by 27 publications (3 citation statements)
References 6 publications
“…The work by Michael L. Nelson et al [7] is further enhanced by Gregory V. Wilson et al [8] by adopting the built-in library methods in C++ and also by E. Arjomandi et al [9] using the default inheritance strategies of modern object-oriented programming languages. Considering only the OO development strategies for making the code parallelization a good number of research attempts were made as A. Krishnamurthy et al [10] using the split-C method, P. A. Buhr et al [11] and Xining Li et al [12] using the default concurrency control by the OO programming languages, S. Shelly et al [13] using the inter-process communication between objects, F. Bodin et al [14] by deploying customization to the runtime. Also, many of the parallel research outcomes have demonstrated the use of a newer programming language to take the maximum advantage of the GPU as shown by Matthew Fluet et al [15].…”
Section: Outcomes From the Parallel Researches
Mentioning confidence: 99%
“…A number of such languages were developed in the late 1980s and early 1990s, including Fortran D [356,475], Vienna Fortran [188,1018], CM Fortran [921], C* [432], data-parallel C, and PC++ [641]. These research efforts were the precursors of informal standardization activities leading to High Performance Fortran (HPF) [468].…”
Section: Data-parallel Programming In High Performance Fortran
Mentioning confidence: 99%
“…Roughly speaking, there are two major directions in those efforts. In the Fortran world, HPF [12], Fortran D [13], Vienna Fortran [6] and others are developed; in the C world, HPC [21], ICC++ [7], MPC++ [14], pC++ [16], EC++ [20] and others are in progress.…”
Section: Comparisons and Conclusion
Mentioning confidence: 99%