2011
DOI: 10.1002/cpe.1842

Asynchronous adaptive optimisation for generic data‐parallel array programming

Abstract: Programming productivity very much depends on the availability of basic building blocks that can be reused for a wide range of application scenarios and on the ability to define rich abstraction hierarchies. Driven by the aim of increased reuse, such basic building blocks tend to become more and more generic in their specification; structural as well as behavioural properties are turned into parameters that are passed on to lower layers of abstraction, where eventually a differentiation is made. In t…
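The abstract's distinction between fully generic building blocks and their structurally specialized instantiations can be made concrete with a small sketch. The C fragment below is illustrative only (the paper targets SAC, not C, and the function names are invented): a rank-generic array addition receives shape and rank as run-time parameters, while a specialized clone has those structural parameters turned into compile-time constants.

```c
/* Minimal sketch of generic versus structurally specialized code,
 * assuming invented function names; plain C stands in for SAC here. */
#include <stdio.h>

/* Rank-generic: shape and rank are run-time parameters, so the loop nest
 * is flattened and nothing about the iteration space is known statically. */
static void add_generic(const double *a, const double *b, double *c,
                        const long *shape, int rank) {
    long n = 1;
    for (int d = 0; d < rank; d++) n *= shape[d];
    for (long i = 0; i < n; i++) c[i] = a[i] + b[i];
}

/* Specialized clone for rank 2, shape 4 x 4: the structural parameters
 * have become compile-time constants, which is the differentiation that
 * a lower abstraction layer (or an optimizer) can exploit. */
static void add_4x4(double a[4][4], double b[4][4], double c[4][4]) {
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            c[i][j] = a[i][j] + b[i][j];
}

int main(void) {
    double a[4][4] = {{1}}, b[4][4] = {{2}}, c[4][4];
    long shape[2] = {4, 4};
    add_generic(&a[0][0], &b[0][0], &c[0][0], shape, 2);
    add_4x4(a, b, c);
    printf("c[0][0] = %f\n", c[0][0]);
    return 0;
}
```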

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
1

Citation Types

0
8
0

Year Published

2011
2011
2019
2019

Publication Types

Select...
3
2

Relationship

2
3

Authors

Journals

Cited by 6 publications (8 citation statements) | References 21 publications
“…We are currently extending this approach to runtime adaptive codes [30]. The idea here is to recompile the program at runtime to adapt to a changing environment.…”
Section: Support for Robustness with SAC (mentioning, confidence: 99%)
“…Our asynchronous adaptive specialization framework [7] builds on today's ubiquity of multi-core processor architectures. Asynchronously with the execution of generic code, be it sequential or automatically parallelized, a specialization controller generates an appropriately specialized and highly optimized binary clone of some generic function, all while the application continues running the original generic code.…”
Section: Introduction (mentioning, confidence: 99%)
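The citation above summarizes the mechanism of the paper's specialization framework: a controller runs concurrently with the application and installs an optimized clone once it is ready, while earlier calls keep using the generic code. The C sketch below illustrates that idea only in outline, under assumptions not taken from the paper (a single atomic function pointer as the dispatch point, a sleep standing in for run-time compilation); it is not the actual SAC runtime API.

```c
/* Minimal sketch of asynchronous adaptive specialization; the registry,
 * controller, and specialized clone below are illustrative assumptions. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

typedef void (*sum_fn)(const double *a, long n, double *out);

/* Generic fallback: works for any length n. */
static void sum_generic(const double *a, long n, double *out) {
    double s = 0.0;
    for (long i = 0; i < n; i++) s += a[i];
    *out = s;
}

/* "Specialized" clone for a fixed length, standing in for a binary
 * produced by recompiling the generic code at run time. */
static void sum_spec_1024(const double *a, long n, double *out) {
    (void)n;                           /* length is known to be 1024 here */
    double s = 0.0;
    for (long i = 0; i < 1024; i++) s += a[i];
    *out = s;
}

/* Dispatch target; the application always calls through this pointer. */
static _Atomic sum_fn current_sum = sum_generic;

/* Specialization controller: runs concurrently with the application,
 * prepares the specialized clone, then swaps it in atomically. */
static void *controller(void *arg) {
    (void)arg;
    sleep(1);                          /* stands in for compile time */
    atomic_store(&current_sum, sum_spec_1024);
    return NULL;
}

int main(void) {
    static double data[1024];
    for (int i = 0; i < 1024; i++) data[i] = 1.0;

    pthread_t tid;
    pthread_create(&tid, NULL, controller, NULL);

    /* The application keeps running generic code; later calls pick up
     * the specialized clone once the controller has installed it. */
    for (int iter = 0; iter < 4; iter++) {
        double s;
        sum_fn f = atomic_load(&current_sum);
        f(data, 1024, &s);
        printf("iteration %d: sum = %f (%s)\n", iter, s,
               f == sum_generic ? "generic" : "specialized");
        sleep(1);
    }
    pthread_join(tid, NULL);
    return 0;
}
```

The atomic function pointer captures the fallback behaviour described in the statement: calls issued before the controller finishes simply keep executing the original generic code.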
“…In our original approach [7] specializations are accumulated during one execution of an application and are automatically removed upon the application's termination. Consequently, any subsequent run of the same application starts specializing again from scratch.…”
Section: Introduction (mentioning, confidence: 99%)
“…Various implementations of a component implementing the same operation are deployed together, and a runtime dispatcher can dynamically select the variant that is expected to perform best on a given problem. Dispatching is implemented using a static table computed at component deployment time and compressed using various techniques. The experiments show that this approach to adaptive optimization produces good performance results without explicit manual parallelization.

Asynchronous adaptive optimisation for generic data‐parallel array programming by Clemens Grelck, Tom van Deurzen, Stephan Herhut, and Sven‐Bodo Scholz describes an aggressive adaptive optimizer for Single Assignment C (SAC), a data‐parallel array programming language. The SAC programming model encourages highly generic array programming, leaving the sizes and, in many cases, even the ranks (number of dimensions) of arrays unknown at compile time.…”
Mentioning (confidence: 99%)
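The first half of the statement describes a different, table-driven approach to variant selection: dispatch through a static table computed at component deployment time. As a rough illustration of that general idea, and not of any concrete system, the hypothetical C sketch below buckets the problem size into classes and indexes a precomputed table to pick an implementation variant; the variants, size classes, and table contents are invented.

```c
/* Minimal sketch of table-driven variant dispatch, assuming a component
 * with several implementation variants; table contents are made up. */
#include <stdio.h>

typedef double (*dot_fn)(const double *x, const double *y, long n);

static double dot_seq(const double *x, const double *y, long n) {
    double s = 0.0;
    for (long i = 0; i < n; i++) s += x[i] * y[i];
    return s;
}

static double dot_unrolled(const double *x, const double *y, long n) {
    double s0 = 0.0, s1 = 0.0;
    long i = 0;
    for (; i + 1 < n; i += 2) { s0 += x[i] * y[i]; s1 += x[i+1] * y[i+1]; }
    for (; i < n; i++) s0 += x[i] * y[i];
    return s0 + s1;
}

static const dot_fn variants[] = { dot_seq, dot_unrolled };

/* Static dispatch table, imagined to be precomputed at deployment time
 * from benchmark runs: one entry per problem-size class. */
static const int dispatch_table[] = { 0, 0, 1, 1 };   /* class -> variant */

static int size_class(long n) {        /* coarse log-scale size bucketing */
    if (n < 64)    return 0;
    if (n < 1024)  return 1;
    if (n < 65536) return 2;
    return 3;
}

static double dot(const double *x, const double *y, long n) {
    return variants[dispatch_table[size_class(n)]](x, y, n);
}

int main(void) {
    double x[2000], y[2000];
    for (int i = 0; i < 2000; i++) { x[i] = 1.0; y[i] = 2.0; }
    printf("dot = %f\n", dot(x, y, 2000));   /* size class 2 -> variant 1 */
    return 0;
}
```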