1994
DOI: 10.1145/197405.197406

Compiler transformations for high-performance computing

Abstract: In the last three decades a large number of compiler transformations for optimizing programs have been implemented. Most optimizations for uniprocessors reduce the number of instructions executed by the program using transformations based on the analysis of scalar quantities and data-flow techniques. In contrast, optimizations for high-performance superscalar, vector, and parallel processors maximize parallelism and memory locality with transformations that rely on tracking the properties of arrays using loop …
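Loop interchange is one of the classic locality transformations this survey catalogs. The minimal C sketch below (the function names and the fixed problem size are illustrative assumptions, not drawn from the paper) shows how swapping two loop headers turns a strided array walk into a unit-stride one, which is the memory-locality effect the abstract refers to.

```c
/* Minimal sketch of loop interchange for memory locality.
 * Function names and the 1024x1024 size are illustrative assumptions. */
#include <stddef.h>

#define N 1024

/* Inner loop varies the row index j, so consecutive accesses to
 * a[j][i] are N doubles apart: each cache line is barely used. */
void scale_poor_locality(double a[N][N], double s)
{
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            a[j][i] *= s;
}

/* After interchanging the loops, the inner loop varies the column
 * index i, so accesses are contiguous and cache lines are fully used. */
void scale_good_locality(double a[N][N], double s)
{
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            a[j][i] *= s;
}
```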

Cited by 634 publications (368 citation statements)
References 98 publications
“…In high-performance computing, loop optimizations play a key role in automated parallelization for exploiting the parallelism capabilities of the hardware. A broader overview and more detailed descriptions of these techniques are available from a number of sources (e.g., [12,13,14]). …”
Section: Related Work (mentioning)
confidence: 99%
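As a concrete illustration of how a loop optimization exposes parallelism, the C sketch below (not taken from the cited sources; the OpenMP pragma and array names are assumptions) applies loop distribution: a loop that mixes a recurrence with an independent statement is split into two loops, so the independent one can run in parallel.

```c
/* Sketch of loop distribution (fission) as a parallelism-enabling
 * transformation.  Array names and OpenMP use are illustrative
 * assumptions; arrays are assumed not to alias (restrict). */
#include <stddef.h>

/* Original loop: S1 carries a dependence across iterations
 * (a[i] depends on a[i-1]), so the whole loop must run serially. */
void fused(double *restrict a, double *restrict b,
           const double *restrict c, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        a[i] = a[i - 1] + c[i];   /* S1: loop-carried dependence        */
        b[i] = c[i] * 2.0;        /* S2: independent across iterations  */
    }
}

/* After distribution, S2 sits in its own loop with no loop-carried
 * dependence, so that loop can be parallelized or vectorized. */
void distributed(double *restrict a, double *restrict b,
                 const double *restrict c, size_t n)
{
    for (size_t i = 1; i < n; i++)
        a[i] = a[i - 1] + c[i];   /* still serial: a recurrence         */

    #pragma omp parallel for      /* iterations are now independent     */
    for (size_t i = 1; i < n; i++)
        b[i] = c[i] * 2.0;
}
```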
“…An approach for fast hot spot estimation/detection is reported in [1]; another method, time-complexity evaluation, can be found in [17]. Sophisticated techniques for optimization profit estimation are described in [26] and [4].…”
Section: Background Compilation (mentioning)
confidence: 99%
“…The relatively clean semantics of these languages also makes it comparatively easy to use formal methods and prove the transformations performed by the parallelizing compiler both correct and efficient. Quite significant progress has been made in the past decade in the area of automatic program parallelization for logic programs, and some of the challenges have been tackled quite effectively. In the following we touch upon a few of them (see, for example, [11] for an overview of the area).…”
Section: (B) (mentioning)
confidence: 99%
“…However, space limitations prevent us from considering these additional issues. Functional programming is another paradigm which also facilitates exploitation of parallelism. However, it can be argued that the lack of certain features, such as pointers and backtracking, while making the parallelization problem easier, also precludes studying some interesting problems.…”
Section: (B) (mentioning)
confidence: 99%