2016
DOI: 10.1145/2954680.2872411

Architecture-Adaptive Code Variant Tuning

Abstract: Code variants represent alternative implementations of a computation, and are common in high-performance libraries and applications to facilitate selecting the most appropriate implementation for a specific execution context (target architecture and input dataset). Automating code variant selection typically relies on machine learning to construct a model during an offline learning phase that can be quickly queried at runtime once the execution context is known. In this paper, we define a new approach called a…
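The truncated abstract describes a two-phase workflow: an offline learning phase builds a model from profiled runs, and a cheap runtime query selects a variant once the execution context is known. A minimal Python sketch of that workflow follows, assuming a decision-tree learner and hypothetical context features and variant names; the paper's actual features, variants, and learner may differ.

# Minimal sketch of offline-learned code variant selection (hypothetical
# features and variant names; not the paper's actual model).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Offline phase: profile each variant across execution contexts and record
# which one was fastest. Features describe the context (device properties,
# input characteristics); labels are indices of the winning variant.
VARIANTS = ["csr_scalar", "csr_vector", "ell"]  # hypothetical SpMV variants

# (num_sm, rows, nnz_per_row) -> index of fastest variant (synthetic data)
X_train = np.array([
    [13, 10_000,   4],
    [13, 10_000,  64],
    [56, 500_000,  4],
    [56, 500_000, 64],
], dtype=float)
y_train = np.array([0, 1, 0, 2])

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Runtime phase: once the execution context is known, a single cheap query
# selects the variant to run -- no runtime compilation or search needed.
def select_variant(num_sm: int, rows: int, nnz_per_row: int) -> str:
    features = np.array([[num_sm, rows, nnz_per_row]], dtype=float)
    return VARIANTS[int(model.predict(features)[0])]

print(select_variant(56, 250_000, 8))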

Cited by 3 publications (3 citation statements)
References 35 publications
“…Online tuning is easier to implement than dynamic tuning (there is no runtime compilation), but it is not practical when the number of possible code variants is high. An example of an online tuner is SOCRATES [17,33].…”
Section: Related Work
confidence: 99%
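As a hedged illustration of the online-tuning strategy this statement contrasts with model-based selection, the sketch below (a hypothetical OnlineTuner, not SOCRATES' actual interface) times each variant on live invocations and then commits to the fastest; with many variants the exploration phase grows too long, which is the impracticality the statement points out.

# Minimal sketch of online variant tuning: measure each variant during the
# first few real calls, then always run the fastest one found.
import time
from typing import Callable, Dict, Optional

class OnlineTuner:
    def __init__(self, variants: Dict[str, Callable]):
        self.variants = dict(variants)
        self.timings: Dict[str, float] = {}
        self.best: Optional[Callable] = None

    def __call__(self, *args, **kwargs):
        if self.best is not None:                    # exploration finished
            return self.best(*args, **kwargs)
        # Explore: time one not-yet-measured variant per call.
        name = next(n for n in self.variants if n not in self.timings)
        start = time.perf_counter()
        result = self.variants[name](*args, **kwargs)
        self.timings[name] = time.perf_counter() - start
        if len(self.timings) == len(self.variants):  # all variants measured
            best_name = min(self.timings, key=self.timings.get)
            self.best = self.variants[best_name]
        return result

# Usage with two toy variants of the same computation:
def sum_builtin(xs):
    return sum(xs)

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

tuner = OnlineTuner({"builtin": sum_builtin, "loop": sum_loop})
data = list(range(100_000))
for _ in range(3):  # first two calls explore, later calls exploit
    tuner(data)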
“…Machine learning on historical autotuning data can be used to decrease the number of tuning decisions performed during program compilation or execution. In [33], a dynamic selection from a very limited number of code variants is based on a model created from previous tuning runs. In [8], a single tuning parameter can be optimized at compilation time by a neural network trained in multiple trial runs.…”
Section: Related Work
confidence: 99%
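To make the second technique concrete, here is a minimal sketch of predicting a single tuning parameter at compilation time with a small neural network, assuming synthetic trial-run data and hypothetical features (trip count, bytes per iteration); the cited work's actual features and network differ.

# Minimal sketch of compile-time prediction of one tuning parameter
# (e.g. a loop unroll factor) from program features, trained on data
# gathered in earlier trial runs. All data and features are stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic training data: (trip_count, bytes_per_iter) -> best unroll
# factor found by exhaustive search in the trial runs.
X = np.array([[64, 4], [64, 32], [1024, 4], [1024, 32], [65536, 4]], float)
y = np.array([8, 2, 8, 4, 16], float)

nn = make_pipeline(
    StandardScaler(),  # normalize features of very different magnitude
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
).fit(X, y)

def predict_unroll(trip_count: int, bytes_per_iter: int) -> int:
    raw = float(nn.predict(np.array([[trip_count, bytes_per_iter]], float))[0])
    return max(1, int(round(raw)))  # clamp to a legal unroll factor

print(predict_unroll(4096, 8))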
“…However, no historical data from previous tuning runs on different hardware has been used. Data from previous runs are used for learning by Muralidharan et al. [21] and Cummins et al. [22]. Those papers focus on dynamic selection from a very limited number of code variants [21] or optimization of a single tuning parameter at compilation time [22]. We are focusing on the usage of historical data in more complex tuning spaces.…”
Section: Related Work
confidence: 99%