Proceedings of the 18th ACM SIGPLAN International Conference on Functional Programming 2013
DOI: 10.1145/2500365.2500605

Automatic SIMD vectorization for Haskell

Abstract: Expressing algorithms using immutable arrays greatly simplifies the challenges of automatic SIMD vectorization, since several important classes of dependency violations cannot occur. The Haskell programming language provides libraries for programming with immutable arrays, and compiler support for optimizing them to eliminate the overhead of intermediate temporary arrays. We describe an implementation of automatic SIMD vectorization in a Haskell compiler which gives substantial vector speedups for a range of p…
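The abstract's central claim is that the immutable-array style sidesteps the dependency problems that plague vectorizers. A minimal sketch of that style (a hypothetical example using plain lists, not code from the paper): because the inputs are never mutated, no iteration can observe another iteration's writes, so every element can in principle be computed in a separate SIMD lane.

```haskell
-- Hypothetical saxpy-style kernel over immutable structures.
-- `xs` and `ys` are never written to, so no loop-carried
-- dependency can exist between elements of the result.
saxpy :: Float -> [Float] -> [Float] -> [Float]
saxpy a xs ys = zipWith (\x y -> a * x + y) xs ys

main :: IO ()
main = print (saxpy 2 [1, 2, 3] [10, 20, 30])
```

In the paper's setting such element-wise pipelines are first fused to remove intermediate arrays and then mapped onto SIMD instructions; the list-based sketch above only illustrates the dependency-free programming style, not the compiler pipeline.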

Cited by 9 publications (8 citation statements)
References 17 publications
“…A loop-invariant code motion pass does such non-speculative movement, when safe to do so. Finally, a SIMD vectorization pass is performed to attempt to create SIMD vector versions of inner loops [21]. A key advantage of performing vectorization in MIL is that the dependence analysis problem for immutable arrays is vastly more tractable than the generalized problem for mutable arrays.…”
Section: Optimizations
Confidence: 99%
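The point about dependence analysis can be made concrete with a hypothetical pair of loops (illustrative only, not from the cited work): an element-wise map over immutable data has independent iterations, while a recurrence carries a value from one iteration to the next and so cannot be naively split across SIMD lanes, even in a pure language.

```haskell
-- Independent iterations: each output element depends only on the
-- immutable input, never on an earlier output. Trivially vectorizable.
independent :: [Int] -> [Int]
independent xs = map (* 2) xs

-- Loop-carried dependency: each element of the running sum depends on
-- the previous one, so lane-by-lane vectorization is not directly valid.
recurrence :: [Int] -> [Int]
recurrence = scanl1 (+)

main :: IO ()
main = print (independent [1, 2, 3], recurrence [1, 2, 3])
```

For mutable arrays the compiler must additionally prove that no write aliases a later read, which is the "generalized problem" the quote refers to; immutability removes that entire class of hazards.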
“…The same flags are passed to our modified GHC and to the standard GHC except in some limited cases where a flag was beneficial to GHC but not to HRC. To eliminate SIMD vectorization as a factor in performance, HRC was run without enabling the vectorization [21] pass. HRC/FLRC supports compilation with both a strict floating point model, in which only value-safe IEEE-compliant reductions are performed and source-level precision is maintained, and a relaxed model, in which non-value-safe floating point optimizations (such as re-association) are performed and the underlying C compiler is allowed to compute results using more or less precision than specified by IEEE semantics.…”
Section: Performance
Confidence: 99%
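Why re-association is not "value-safe": floating-point addition is not associative, so reordering a reduction can change its result. A small illustration (hypothetical, not taken from the cited benchmarks):

```haskell
-- Summing the same three values in two associations gives two
-- different IEEE double results, because the small term 1.0 is
-- absorbed when added to the large magnitude 1.0e16 first.
leftSum, rightSum :: Double
leftSum  = (1.0e16 + (-1.0e16)) + 1.0   -- cancel first, then add 1.0
rightSum = 1.0e16 + ((-1.0e16) + 1.0)   -- 1.0 is lost to rounding

main :: IO ()
main = print (leftSum, rightSum)
```

A strict model must preserve the source's association and therefore forbids this transformation; a relaxed model may re-associate reductions (for example, to enable SIMD partial sums) at the cost of bit-exact reproducibility.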
“…Functional languages like Haskell, for example, inherently allow the parallel execution of functions [20], [22]. They benefit from their paradigm's property that (most) functions have no side-effects.…”
Section: Introduction
Confidence: 99%
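The absence of side effects means two pure computations can be evaluated in either order, or simultaneously, without changing the result. A minimal sketch using GHC's spark primitives from `GHC.Conc` in `base` (a generic illustration of this property, not code from the cited works):

```haskell
import GHC.Conc (par, pseq)

-- A deliberately expensive pure function.
fib :: Int -> Int
fib n | n < 2     = n
      | otherwise = fib (n - 1) + fib (n - 2)

-- `par` sparks `x` for possible parallel evaluation while `pseq`
-- forces `y` on the current thread; purity guarantees the sum is
-- the same whether or not the spark actually runs in parallel.
parSum :: Int -> Int -> Int
parSum a b = x `par` (y `pseq` (x + y))
  where
    x = fib a
    y = fib b

main :: IO ()
main = print (parSum 20 21)
```

Actual parallel speedup requires compiling with `-threaded` and running with multiple capabilities (e.g. `+RTS -N`), but the semantics are identical either way, which is precisely the point the quoted introduction makes.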