Proceedings of the Third ACM Haskell Symposium on Haskell 2010
DOI: 10.1145/1863523.1863540

Supercompilation by evaluation

Abstract: Supercompilation is a technique due to Turchin [1] which allows for the construction of program optimisers that are both simple and extremely powerful. Supercompilation is capable of achieving transformations such as deforestation [2], function specialisation and constructor specialisation [3]. Inspired by Mitchell's promising results [4], we show how the call-by-need supercompilation algorithm can be recast to be based explicitly on an evaluator, and in the process extend it to deal with recursive let expressions…
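To make the abstract's claims concrete, here is a minimal, hedged Haskell sketch of the kind of deforestation a supercompiler can perform; the function names and the shape of the residual program are illustrative assumptions, not code from the paper.

-- Input program: 'map' allocates an intermediate list that 'sum' then consumes.
sumDoubled :: [Int] -> Int
sumDoubled xs = sum (map (* 2) xs)

-- A plausible residual program after supercompilation: evaluation drives the
-- composition into a single recursive traversal, so no intermediate list is
-- ever built. (Name and exact shape are hypothetical.)
sumDoubled_sc :: [Int] -> Int
sumDoubled_sc []       = 0
sumDoubled_sc (x : xs) = 2 * x + sumDoubled_sc xs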

Cited by 19 publications (8 citation statements)
References 23 publications
“…The backward mode (< and ) resembles weakest-precondition reasoning (Dijkstra 1975; Nori et al. 2014), pre-image computation (Toronto, McCarthy, and Horn 2015), and constraint propagation (Saraswat, Rinard, and Panangaden 1991; Gupta, Jagadeesan, and Panangaden 1999). The forward mode (> and ) resembles lazy evaluation (Launchbury 1993), in particular lazy partial evaluation (Jørgensen 1992; Fischer et al. 2008; Mitchell 2010; Bolingbroke and Peyton Jones 2010). Our laziness postpones nondeterminism (Fischer, Kiselyov, and Shan 2011) in the measure monad, an extension of the probability monad (Giry 1982; Ramsey and Pfeffer 2002).…”
Section: Related Work
confidence: 99%
“…We suffer from the same problem that Bolingbroke and Peyton Jones [3] report: the supercompiler can sometimes prevent GHC from applying other important optimizations such as unboxing of arithmetics. Since that is a slightly different algorithm it is not exactly the same programs that get staggering increases in memory allocations.…”
Section: Measurements
confidence: 99%
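For readers unfamiliar with the optimisation being lost, the following hedged Haskell sketch (not code from either paper) shows what "unboxing of arithmetic" means in GHC: a strict accumulator loop is compiled, via strictness analysis and the worker/wrapper transformation, into a worker that computes entirely on unboxed Int# values. A residual program whose structure hides that strictness can prevent GHC from deriving such a worker, which is the interaction the quotation describes.

{-# LANGUAGE MagicHash, BangPatterns #-}
import GHC.Exts (Int (I#), Int#, (+#), (>#), isTrue#)

-- Source-level loop as a programmer would write it.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

-- Roughly the worker GHC derives with -O: all arithmetic is on unboxed
-- Int# values, so no boxed Int is allocated per iteration. (Hand-written
-- here purely for illustration.)
sumToWorker :: Int -> Int
sumToWorker (I# n) = I# (go 0# 1#)
  where
    go :: Int# -> Int# -> Int#
    go acc i
      | isTrue# (i ># n) = acc
      | otherwise        = go (acc +# i) (i +# 1#)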
“…Avoiding name capture can cause significant churn and make the supercompiler dreadfully slow. Bolingbroke and Peyton Jones [3] report that for a particular example 42% of their supercompilation time is spent on managing names and renaming.…”
Section: Name Capture
confidence: 99%
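To illustrate where that cost comes from, here is a minimal, self-contained Haskell sketch (not the paper's actual representation) of capture-avoiding substitution: whenever a binder would capture a free variable of the term being substituted in, the binder must be freshened and the body renamed, costing an extra traversal each time.

import qualified Data.Set as Set
import Data.Set (Set)

data Expr = Var String | Lam String Expr | App Expr Expr
  deriving Show

freeVars :: Expr -> Set String
freeVars (Var x)   = Set.singleton x
freeVars (Lam x e) = Set.delete x (freeVars e)
freeVars (App f a) = freeVars f `Set.union` freeVars a

-- subst x s e replaces free occurrences of x in e by s, renaming binders
-- where necessary to avoid capturing free variables of s.
subst :: String -> Expr -> Expr -> Expr
subst x s (Var y)
  | y == x    = s
  | otherwise = Var y
subst x s (App f a) = App (subst x s f) (subst x s a)
subst x s (Lam y body)
  | y == x                    = Lam y body            -- x is shadowed: stop
  | y `Set.member` freeVars s =                       -- would capture: rename y
      let y'    = freshen y (freeVars s `Set.union` freeVars body)
          body' = subst y (Var y') body               -- the extra traversal
      in  Lam y' (subst x s body')
  | otherwise                 = Lam y (subst x s body)

-- Pick a name not in the given set by appending primes.
freshen :: String -> Set String -> String
freshen y used
  | y `Set.member` used = freshen (y ++ "'") used
  | otherwise           = y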
“…As mentioned in §5.2 we must manually unfold loops over regions and rectangles as GHC avoids inlining the definitions of recursive functions. The nice way to fix this would be some form of supercompilation [4,16]. Support for supercompilation in GHC is currently being developed, though still in an early stage.…”
Section: Manual Unwinding of Recursive Functions
confidence: 99%
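As a hedged sketch of the workaround described in the quotation: because GHC will not inline recursive definitions, a small fixed-size loop can be written out by hand so that every step is visible to the inliner and specialised to its call site. The function names and the fixed width of 4 are illustrative assumptions, not code from the cited paper.

{-# LANGUAGE BangPatterns #-}

-- Recursive loop over the columns of a fixed-width row: because 'goCols'
-- is recursive, GHC keeps it out of line and cannot specialise it to 'f'.
sumRowRec :: (Int -> Int) -> Int
sumRowRec f = goCols 0 0
  where
    goCols !acc i
      | i >= 4    = acc
      | otherwise = goCols (acc + f i) (i + 1)

-- Manually unfolded equivalent: no recursion remains, so GHC can inline it
-- at each call site and optimise 'f 0', 'f 1', ... individually.
{-# INLINE sumRowUnrolled #-}
sumRowUnrolled :: (Int -> Int) -> Int
sumRowUnrolled f = f 0 + f 1 + f 2 + f 3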