1992
DOI: 10.1007/bf00155803

An introduction to compilation issues for parallel machines

Abstract: The exploitation of today's high-performance computer systems requires the effective use of parallelism in many forms and at numerous levels. This survey article discusses program analysis and restructuring techniques that target parallel architectures. We first describe various categories of architectures that are oriented toward parallel computation models: vector architectures, shared-memory multiprocessors, massively parallel machines, message-passing architectures, VLIWs, and multithreaded architectures. …

Cited by 11 publications (3 citation statements) · References 71 publications
“…A data dependence is induced by two adjacent accesses to the same memory variable, at least one of which is a write [4,6]. For convenience, we call an execution of a statement a statement instance.…”
Section: Data Dependence Analysis (mentioning, confidence: 99%)
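The dependence notion quoted above can be made concrete with a minimal sketch (the variable names and functions are illustrative, not from the cited paper): two statement instances access the same memory location, the first is a write, so reordering them changes the program's result — a flow (true) dependence.

```python
# Sketch of a flow (true) data dependence: S1 writes a[1], S2 reads a[1].
# The write-then-read pair constrains legal reordering by a parallelizer.

def run_in_order():
    a = [0] * 4
    a[1] = 5        # S1: write to a[1]
    x = a[1] + 2    # S2: read of a[1]  ->  flow dependence S1 -> S2
    return x

def run_reordered():
    a = [0] * 4
    x = a[1] + 2    # S2 executed first: reads the stale value 0
    a[1] = 5        # S1 executed second
    return x

print(run_in_order())   # 7
print(run_reordered())  # 2 -- reordering violated the dependence
```

A compiler may only reorder or parallelize the two statement instances when no such dependence links them.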
“…This being the case, it is very inefficient to pass frequent small messages between processors. Consequently, the compilation system should ensure that individual processors are assigned larger and relatively independent threads of control [20]. The parallelism is thus coarse-grained, often at the procedure or subtask level.…”
Section: Popular Paradigms for High-Level Specification of Parallelism (mentioning, confidence: 99%)
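The coarse-grained assignment described above can be sketched as follows (a minimal illustration using Python threads as a stand-in for processors; the chunking scheme is an assumption, not the cited paper's method): each worker receives one large, independent chunk of the data and communicates only its final partial result, instead of exchanging a message per element.

```python
# Sketch: coarse-grained work assignment. Each worker gets one large,
# independent subtask; only the chunk results are communicated back,
# avoiding frequent small messages between workers.
from concurrent.futures import ThreadPoolExecutor

def chunked(data, n_workers):
    """Split data into at most n_workers contiguous, independent chunks."""
    size = -(-len(data) // n_workers)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def process_chunk(chunk):
    # The whole subtask runs locally; no per-element communication.
    return sum(x * x for x in chunk)

data = list(range(1000))
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_chunk, chunked(data, 4)))
total = sum(partials)
print(total)  # 332833500, the sum of squares 0..999
```

Only four result values cross worker boundaries here, versus a thousand if each element were a separate task — the granularity trade-off the quoted passage describes.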