1995
DOI: 10.1109/2.467577

The Paradigm compiler for distributed-memory multicomputers

Abstract: Massively parallel distributed-memory multicomputers can achieve the high performance levels required to solve the Grand Challenge computational science problems (a class of computational applications, identified by the 1992 US Presidential Initiative in High-Performance Computing and Communications, that would require a significant increase in computing power). Multicomputers such as the Intel Paragon, the IBM SP-1/SP-2 (Scalable PowerParallel 1 and 2), and the Thinking Machines CM-5 (Connection Machine 5) of…

Cited by 148 publications (70 citation statements)
References 10 publications
“…This is mainly due to: (1) inter-…; (2) we do not use power hungry multi-port memories. Consequently, we cannot schedule operations that access the same data in parallel.…”
Footnote 6: We compute the energy consumption as follows: 30% of the available memory slots are used.
Section: Access Conflicts Reduce the System's Performance
confidence: 99%
“…A large body of research exists in the high-performance computing domain on parallelizing applications while reducing the communication cost (e.g., the SUIF-project [19] and the Paradigm compiler [6]). However, they target an architecture which is very different from ours.…”
Section: Memory Optimization in Multi-threaded Applications
confidence: 99%
“…Current compiler technology [7][8][9][10][11] can efficiently automate the introduction of OpenMP directives to regular loops that iterate over random-access arrays as defined by Fortran or C. However, because most C++ programs, including many scientific applications, use higher-level abstractions for which semantics are unknown to the compiler, these abstractions are left unoptimized by most parallelizing compilers. By providing mechanisms to optimize object-oriented library abstractions, we thus allow the efficient tailoring of the programming environment as essentially a programming language that is more domain-specific than a general purpose language could allow, thereby allowing the improvement of programmer productivity without degrading application performance.…”
Section: Parallelizing User-Defined Containers Using OpenMP
confidence: 99%
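
As a concrete illustration of the preceding statement, here is a minimal sketch (in C, since the quote names Fortran and C) of the kind of regular loop over a random-access array that such compiler technology can annotate with an OpenMP work-sharing directive; the array names and sizes are illustrative, not taken from any of the cited papers.

    #include <stdio.h>

    #define N 1000000

    int main(void) {
        /* static storage keeps the large arrays off the stack */
        static double a[N], b[N];

        for (int i = 0; i < N; i++)
            b[i] = (double)i;

        /* A regular loop with independent iterations: exactly the
         * pattern a parallelizing compiler can handle by inserting
         * an OpenMP directive (compile with -fopenmp or equivalent). */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 2.0 * b[i] + 1.0;

        printf("a[42] = %f\n", a[42]);
        return 0;
    }

Without OpenMP support the pragma is simply ignored and the loop runs serially, which is why directive-based parallelization of this sort composes cleanly with existing sequential code; the harder case the quote describes is C++ code behind user-defined container abstractions, whose semantics the compiler cannot see.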
“…the SUIF project [40-42] and the Paradigm compiler [43,44]). However, they target an architecture which is very different from ours.…”
Section: Memory Optimisation in Multi-threaded Applications
confidence: 99%