2015 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
DOI: 10.1109/cgo.2015.7054203

HELIX-UP: Relaxing program semantics to unleash parallelization

Cited by 33 publications (20 citation statements) · References 22 publications
“…Many studies have introduced novel software techniques for approximation that reduce execution time and/or energy. The transformations include task skipping [Meng et al. 2009, 2010; Rinard 2006], loop perforation [Misailovic et al. 2010; Sidiroglou-Douskos et al. 2011], approximate function substitution [Ansel et al. 2011; Baek and Chilimbi 2010; Zhu et al. 2012], dynamic knobs (dynamically changing function versions), reduction sampling [Goiri et al. 2015; Zhu et al. 2012], tuning floating-point operations [Rubio-González et al. 2013; Schkufza et al. 2014], and approximate parallelization [Campanoni et al. 2015]. These techniques have been shown to work well across a variety of application domains resilient to small errors.…”
Section: Hardware Sensitivity
confidence: 99%
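The "dynamic knob" transformation mentioned above (swapping function versions at runtime) can be illustrated with a minimal hypothetical sketch; the function names and the Taylor-series fallback below are assumptions for illustration, not the mechanism of any cited system.

```python
import math

# Two interchangeable versions of the same function: a precise one and
# a cheap approximation (3-term Taylor series, accurate near zero).
def sin_precise(x):
    return math.sin(x)

def sin_approx(x):
    return x - x**3 / 6 + x**5 / 120

def make_knob(versions):
    """Build a callable whose implementation can be swapped at runtime,
    plus a setter that moves the accuracy/speed knob."""
    state = {"level": 0}
    def call(x):
        return versions[state["level"]](x)
    def set_level(level):
        state["level"] = max(0, min(level, len(versions) - 1))
    return call, set_level

sin_knob, set_sin_level = make_knob([sin_precise, sin_approx])
set_sin_level(1)  # under load, relax quality to gain speed
```

A runtime system would move the knob in response to load or an accuracy budget; here it is set manually.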
“…Other techniques reduce the number of computations to gain performance, e.g., loop perforation, which removes some loop iterations based on profiling information [31]. Relaxing semantics can lead to better utilization of the hardware's capacity, e.g., by extracting more parallelism [50]. Byna et al. used approximation techniques on a sequential CPU algorithm to obtain a parallel algorithm with great performance on GPUs [51].…”
Section: Related Work
confidence: 99%
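Loop perforation, as described in the excerpt above, can be sketched minimally as follows; the fixed perforation rate and the rescaling heuristic are illustrative assumptions (a real system derives the skip pattern from profiling and validates the resulting quality loss).

```python
# Hypothetical sketch of loop perforation: execute only a subset of
# loop iterations to trade accuracy for speed.
def perforated_sum(values, perforation_rate=0.5):
    """Approximate sum(values) by running a strided subset of the
    iterations, then extrapolating to the full iteration count."""
    stride = max(1, round(1 / (1 - perforation_rate)))  # skip pattern
    executed = values[::stride]
    if not executed:
        return 0.0
    # Rescale by the fraction of iterations actually executed.
    return sum(executed) * (len(values) / len(executed))

exact = sum(range(1000))                                   # 499500
approx = perforated_sum(list(range(1000)), perforation_rate=0.5)
```

Half the iterations are skipped, yet for this smooth workload the extrapolated result stays within about 0.1% of the exact sum.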
“…SAGE [8] is a GPU-oriented technique that skips or simplifies processing with respect to its performance impact, while we primarily focus on accuracy. HELIX-UP [5] ignores some dependences to enable code parallelization, which is complementary to our approach. Power savings are also studied by Misailovic et al. [27], who provide a language interface relying on hardware that offers approximate instructions and memory storage, drawing less power but possibly producing a wrong result at a given rate.…”
Section: Comparison Against Loop Perforation
confidence: 99%
“…Relaxed semantics models are possible, e.g., to support commutativity and thereby enable vectorization [4]; however, all input code iterations must still be executed in the optimized code. On the other hand, more aggressive techniques to automatically compute approximations have been designed, e.g., by ignoring some dependences to enable parallelization [5], by providing alternative implementations of some code parts [6], or by skipping computations [7], [8]. In our work, we investigate a new approach, called Adaptive Code Refinement (ACR), inspired by Adaptive Mesh Refinement [9], a classical numerical analysis technique that dynamically tunes a computational grid to achieve precise computation only where it matters.…”
Section: Introduction
confidence: 99%
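The idea of ignoring a loop-carried dependence to expose parallelism, referenced as [5] above, can be illustrated with a hypothetical sketch: a first-order recurrence (exponential smoothing) is split into chunks that each restart the recurrence from zero, breaking the cross-chunk dependence at the cost of a bounded error near chunk boundaries. All names here are illustrative, and Python threads are used only to show the transformation, not for actual speedup.

```python
from concurrent.futures import ThreadPoolExecutor

def smooth_sequential(xs, alpha=0.5):
    """Exact exponential smoothing: each iteration depends on the
    previous one, so the loop is inherently sequential."""
    out, prev = [], 0.0
    for x in xs:
        prev = alpha * prev + (1 - alpha) * x
        out.append(prev)
    return out

def smooth_relaxed(xs, chunks=4, alpha=0.5):
    """Approximate version: break the loop-carried dependence at chunk
    boundaries (each chunk restarts the recurrence from 0), so chunks
    can run in parallel. Errors appear near chunk starts and decay
    geometrically with alpha."""
    n = len(xs)
    bounds = [(i * n // chunks, (i + 1) * n // chunks) for i in range(chunks)]
    with ThreadPoolExecutor() as ex:
        parts = ex.map(lambda b: smooth_sequential(xs[b[0]:b[1]], alpha), bounds)
    return [v for part in parts for v in part]
```

For a constant input the relaxed version diverges from the exact one only in the first few elements of each chunk, which is the kind of bounded quality loss these techniques accept in exchange for parallelism.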