1979
DOI: 10.1145/359060.359069

Global optimization by suppression of partial redundancies

Abstract: The elimination of redundant computations and the moving of invariant computations out of loops are often done separately, with invariants moved outward loop by loop. We propose to do both at once and to move each expression directly to the entrance of the outermost loop in which it is invariant. This is done by solving a more general problem, i.e. the elimination of computations performed twice on a given execution path. Such computations are termed partially redundant. Moreover, the algorithm does not requir…
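
To make the idea concrete, here is a small, hypothetical C sketch (the variables and control flow are invented for illustration and are not taken from the paper). Before PRE, a + b is computed on one branch and again after the join, so it is evaluated twice on some execution paths; PRE inserts the computation on the branch where it was missing and reuses a temporary. Loop-invariant motion falls out as a special case, since an invariant computed inside a loop is partially redundant with itself along the back edge.

```c
/* Sketch only: hypothetical variables, not taken from the paper. */

/* Before PRE: a + b is evaluated on the then-branch and again after the
 * join, so every path through the then-branch computes it twice. */
int before_pre(int cond, int a, int b) {
    int x, y;
    if (cond)
        x = a + b;
    else
        x = 0;
    y = a + b;              /* partially redundant: recomputed when cond held */
    return x + y;
}

/* After PRE: the computation is inserted on the path where it was missing,
 * and the later occurrence reuses the temporary, so no path evaluates
 * a + b more than once. */
int after_pre(int cond, int a, int b) {
    int x, y, t;
    if (cond) {
        t = a + b;          /* original occurrence, saved in t */
        x = t;
    } else {
        x = 0;
        t = a + b;          /* inserted computation */
    }
    y = t;                  /* redundancy eliminated */
    return x + y;
}
```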

Cited by 293 publications (131 citation statements)
References 9 publications

“…Both of these characteristics hardly arise in practice. E.g., Morel and Renvoise report that they never observed more than 3 iterations in their experiments [20], while Dhamdhere reports a number of 5 [3,4]. Typical DFA-problems requiring lattices with longer chains (than e.g.…”
Section: Practice: Empirical Evaluation
confidence: 97%
“…The additional argument ant_exp is the set of expressions that are anticipable at the join point. We assume that the set of anticipable expressions are precomputed at every program point using the classical anticipability analysis [9,10]. T is the set of all program variables …”
Section: Improved Algorithm
confidence: 99%
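
For reference, the precomputation mentioned above typically solves the standard backward "very busy expressions" (anticipability) system sketched below; the ANTLOC/TRANSP/ANTIN/ANTOUT names follow common textbook notation and are not quoted from the cited work.

```latex
% Anticipability as a backward, all-paths bit-vector problem (sketch).
% ANTLOC_i: expressions evaluated in block i before any operand is redefined.
% TRANSP_i: expressions none of whose operands are redefined in block i.
\begin{align*}
\mathrm{ANTOUT}_i &= \bigcap_{j \in \operatorname{succ}(i)} \mathrm{ANTIN}_j,
  \qquad \mathrm{ANTOUT}_{\mathrm{exit}} = \emptyset,\\
\mathrm{ANTIN}_i  &= \mathrm{ANTLOC}_i \cup \left(\mathrm{TRANSP}_i \cap \mathrm{ANTOUT}_i\right).
\end{align*}
```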
“…Most of the efficient GVN algorithms that followed Kildall concentrate on detecting equalities among variables and hence, are limited in their ability to identify value-based redundancies [3,7]. Rosen et al [2] is an attempt at using GVN for partial redundancy elimination (PRE) [8][9][10]. Briggs et al [11] give a comparison of the different techniques for value numbering with respect to their use in different kinds of optimizations.…”
Section: Introduction
confidence: 99%
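
As a quick, hypothetical C illustration of the distinction drawn above (names invented, not from the cited papers): purely lexical PRE matches only syntactically identical expressions, whereas GVN gives x + b and a + b the same value number because x and a are known to hold the same value.

```c
/* Hypothetical example: x and a hold the same value, so x + b and a + b
 * get the same value number under GVN, even though they are different
 * lexical expressions and would be missed by purely syntactic PRE. */
int value_based_redundancy(int a, int b) {
    int x = a;
    int y = a + b;
    int z = x + b;          /* value-equal to y, lexically different */
    return y + z;
}
```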
“…PRE problems come in two flavours: classic PRE and speculative PRE. Classic PRE, as described in the seminal work [22], inserts a computation at a point only if the point is safe (or down-safe) for the computation, i.e., only if the computation is fully anticipatable at the point. On the other hand, speculative PRE may insert a computation at a point even if the computation is partially but not necessarily fully anticipatable at the point.…”
Section: Introduction
confidence: 99%
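
A small, hypothetical C sketch of that distinction (the variables and the predicate p are invented for illustration): the loop preheader is not down-safe for a + b because iterations where p(i) is false never evaluate it, so classic PRE leaves the code unchanged, while speculative PRE may hoist the computation anyway on the bet that p(i) is usually true.

```c
int p(int i);               /* hypothetical predicate, assumed for illustration */

/* Classic PRE: the preheader is not down-safe for a + b (iterations with
 * p(i) false never evaluate it), so no insertion is made. */
int classic(int n, int a, int b) {
    int s = 0;
    for (int i = 0; i < n; i++)
        if (p(i))
            s += a + b;     /* partially redundant across iterations */
    return s;
}

/* Speculative PRE: a + b may be hoisted anyway, adding work on iterations
 * where p(i) is false in exchange for removing it everywhere else. */
int speculative(int n, int a, int b) {
    int s = 0;
    int t = a + b;          /* speculative insertion at a non-down-safe point */
    for (int i = 0; i < n; i++)
        if (p(i))
            s += t;
    return s;
}
```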