Proceedings of the 4th International Conference on Computing Frontiers 2007
DOI: 10.1145/1242531.1242553

Fast compiler optimisation evaluation using code-feature based performance prediction

Abstract: Performance tuning is an important and time-consuming task which may have to be repeated for each new application and platform. Although iterative optimisation can automate this process, it still requires many executions of different versions of the program. As execution time is frequently the limiting factor in the number of versions or transformed programs that can be considered, what is needed is a mechanism that can automatically predict the performance of a modified program without actually having to run …

Cited by 61 publications (38 citation statements) | References 29 publications

“…In compiler research, the feature sets used for predictive models are often provided without explanation and rarely is the quality of those features evaluated. More commonly, an initial large, high dimensional candidate feature space is pruned via feature selection [3], or projected into a lower dimensional space [43,44]. FEAST employs a range of existing feature selection methods to select useful candidate features [45].…”
Section: Related Work
confidence: 99%
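The excerpt above contrasts pruning a high-dimensional candidate feature space via feature selection with projecting it into a lower-dimensional space. As a rough illustration only (not FEAST itself, and with a purely synthetic feature matrix), both steps can be sketched with scikit-learn:

```python
# Illustrative sketch: pruning vs. projecting a candidate code-feature space.
# X (programs x candidate features) and y (e.g. measured speedup of a fixed
# optimisation) are synthetic placeholders, not data from the cited papers.
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, mutual_info_regression
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((200, 60))   # 200 programs, 60 candidate code features
y = rng.random(200)         # e.g. speedup of one optimisation per program

# Option 1: feature selection -- drop near-constant features, then keep the
# k features most informative about the target.
X_pruned = VarianceThreshold(threshold=1e-3).fit_transform(X)
X_selected = SelectKBest(mutual_info_regression, k=10).fit_transform(X_pruned, y)

# Option 2: projection -- map the candidate space into a lower-dimensional
# space instead of selecting individual features.
X_projected = PCA(n_components=10).fit_transform(X)

print(X_selected.shape, X_projected.shape)   # (200, 10) (200, 10)
```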
“…Previous work [13], [8] has proposed to model the optimization problem by characterizing a program using performance counters. We use a prediction model originally proposed by Cavazos et al [12], [7], but slightly adapted to support polyhedral primitives instead. We refer to it as a speedup predictor model.…”
Section: B Speedup Prediction Model
confidence: 99%
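A minimal sketch of the performance-counter characterisation mentioned in this excerpt, assuming Linux perf is available on the target machine; the counter list, the per-instruction normalisation, and the benchmark command are illustrative choices, not the exact setup of the cited papers:

```python
# Sketch: characterise a program by running its baseline binary under
# `perf stat` and turning a few hardware counters into a feature vector.
import subprocess

COUNTERS = ["instructions", "cycles", "cache-misses", "branch-misses"]

def characterise(cmd):
    """Return counter readings per instruction for the command `cmd`."""
    perf = subprocess.run(
        ["perf", "stat", "-x", ",", "-e", ",".join(COUNTERS)] + cmd,
        capture_output=True, text=True, check=True)
    raw = {}
    for line in perf.stderr.splitlines():      # -x sends CSV rows to stderr
        fields = line.split(",")
        if len(fields) < 3:
            continue
        name = fields[2].split(":")[0]         # strip modifiers such as ":u"
        if name in COUNTERS:
            try:
                raw[name] = float(fields[0])
            except ValueError:                 # "<not counted>" and friends
                pass
    instructions = raw.get("instructions", 1.0)
    return {name: value / instructions for name, value in raw.items()}

# e.g. characterise(["./benchmark", "--input", "ref"])
```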
“…This model has been used in recent work [9,13,22] to predict good optimization sequences for various different compilers. We refer to this model as the speedup predictor because it takes as input a program's characterization (P) and an optimization sequence (O), and it outputs the predicted speedup over some baseline for that optimization sequence.…”
Section: Speedup Prediction Model
confidence: 99%
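The speedup predictor described here can be sketched as a regression over the concatenation of the program characterisation P and the optimisation sequence O. The 0/1 sequence encoding, the gradient-boosting model, and all numbers below are assumptions for illustration, not the model of the cited papers:

```python
# Sketch: learn f(P, O) -> predicted speedup over a baseline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Each row concatenates P (3 program features) with O (4 transformation flags).
X_train = np.array([
    # P ----------------   O ---------
    [0.9, 0.12, 0.05,      1, 0, 1, 0],
    [0.9, 0.12, 0.05,      0, 1, 1, 1],
    [1.3, 0.30, 0.02,      1, 1, 0, 0],
    [1.3, 0.30, 0.02,      0, 0, 1, 1],
])
y_train = np.array([1.20, 0.95, 1.45, 1.05])   # measured speedup over baseline

predictor = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

def predict_speedup(P, O):
    """Predicted speedup of applying sequence O to a program characterised by P."""
    return float(predictor.predict([np.concatenate([P, O])])[0])

print(predict_speedup([1.0, 0.20, 0.04], [1, 0, 1, 1]))
```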
“…Using a well-constructed machine learning model to choose optimizations for a specific program has repeatedly been shown to outperform the most aggressive optimization levels in open-source and commercial compilers [6,10,13,15,16,19,20,22,25,30]. However, to use machine learning effectively, it is critical to use expressive features that characterize programs well and that strongly correlate to beneficial optimization sequences for the target program.…”
Section: Introduction
confidence: 99%
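One way such a model is typically used, sketched under assumptions: enumerate (or, for realistic search spaces, sample) candidate optimisation sequences, score each one with the trained speedup predictor for the target program, and compile only with the top-ranked sequence. The flag names and the placeholder scoring function below are hypothetical stand-ins:

```python
# Sketch: pick the optimisation sequence with the highest predicted speedup
# instead of running every candidate version of the program.
from itertools import product

CANDIDATE_FLAGS = ["unroll", "vectorise", "inline", "tile"]

def predict_speedup(program_features, sequence):
    # Placeholder; a real system would query a trained model such as the
    # speedup predictor sketched above.
    return 1.0 + 0.1 * sum(sequence) - 0.05 * sequence[-1]

def best_sequence(program_features):
    candidates = list(product([0, 1], repeat=len(CANDIDATE_FLAGS)))
    return max(candidates, key=lambda seq: predict_speedup(program_features, seq))

best = best_sequence([1.0, 0.2, 0.04])
print([flag for flag, on in zip(CANDIDATE_FLAGS, best) if on])
```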