2012 Innovative Parallel Computing (InPar)
DOI: 10.1109/inpar.2012.6339587

Machine learning for predictive auto-tuning with boosted regression trees

Abstract: The rapidly evolving landscape of multicore architectures makes the construction of efficient libraries a daunting task. A family of methods known collectively as "auto-tuning" has emerged to address this challenge. Two major approaches to auto-tuning are empirical and model-based: empirical auto-tuning is a generic but slow approach that works by measuring runtimes of candidate implementations; model-based auto-tuning predicts those runtimes using simplified abstractions designed by hand. We show that machine …
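To make the abstract's contrast concrete, the following is a minimal sketch in Python of the paper's central idea, assuming scikit-learn's GradientBoostingRegressor as the boosted-regression-tree model. The tuning parameters, synthetic runtimes, and feature names are illustrative assumptions, not the authors' actual benchmark setup.

# A minimal sketch of predictive auto-tuning with boosted regression
# trees. Only a small training subset of candidates is ever timed;
# the model predicts runtimes for the rest of the search space.
# All features and data below are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical auto-tuning search space: each row is one candidate
# kernel configuration (block size, unroll factor, tile width).
n_candidates = 500
X = np.column_stack([
    rng.choice([32, 64, 128, 256], n_candidates),  # block size
    rng.choice([1, 2, 4, 8], n_candidates),        # unroll factor
    rng.choice([8, 16, 32], n_candidates),         # tile width
])

# Measured runtimes (synthetic here; in practice these come from
# actually timing the training candidates on the target hardware).
y = (X[:, 0] * 0.01 + 50.0 / X[:, 1] + X[:, 2] * 0.1
     + rng.normal(0, 1.0, n_candidates))

# Time only 100 of the 500 candidates, then fit the model on them.
train = rng.choice(n_candidates, size=100, replace=False)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
model.fit(X[train], y[train])

# Predict runtimes for the full space and pick the fastest candidate
# without measuring every implementation.
pred = model.predict(X)
best = int(np.argmin(pred))
print("predicted-best config:", X[best], "predicted runtime:", pred[best])

The point of the approach, per the abstract, is to combine the generality of empirical timing with the speed of a model: the learned model replaces the hand-designed abstraction, and only the training subset pays the cost of real measurements.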

Cited by 60 publications (53 citation statements) | References 35 publications

“…The works by Bergstra et al, Jia et al, and Nugteren et al are those most closely related to ours. In Bergstra et al, a model based on boosted regression trees was used to build an auto‐tuner, evaluated with a single GPU benchmark, filterbank correlation. The Starchart system was used to build a regression tree model that could be used to partition the design space of an application, discover its structure, and find optimal parameter values within the different regions.…”
Section: Related Work (supporting)
confidence: 88%
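The design-space partitioning that this statement attributes to Starchart can be illustrated with an ordinary regression tree: each leaf of the tree corresponds to one region of the design space, and the best-performing sample within each region can be reported separately. This is a hedged sketch of the general idea over assumed synthetic data, not the Starchart system itself.

# Partition a (configuration -> runtime) design space with a
# regression tree, then find the best sampled configuration inside
# each leaf region. Data and knob names are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.integers(1, 32, size=(300, 2)).astype(float)  # two tuning knobs
y = np.abs(X[:, 0] - 16) + 8.0 / X[:, 1] + rng.normal(0, 0.5, 300)

tree = DecisionTreeRegressor(max_leaf_nodes=8).fit(X, y)

# apply() maps each sample to the leaf (region) that contains it.
leaves = tree.apply(X)
for leaf in np.unique(leaves):
    idx = np.where(leaves == leaf)[0]
    best = idx[np.argmin(y[idx])]
    print(f"region {leaf}: best config {X[best]}, runtime {y[best]:.2f}")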
“…For the optimal designs, we selected the D‐Optimal Design , because it selects configurations according to an optimization criteria that fits best for our purpose (configurations with the largest distance to each other are selected). Last, we also use a random selection of variants, because this is a standard method in machine learning and often leads to good results …”
Section: Performance‐influence Models (mentioning)
confidence: 99%
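The two training-set selection strategies contrasted in this statement can be sketched as follows. The greedy maximum-distance pass is a simple stand-in for the D-Optimal Design criterion the authors summarize (configurations with the largest distance to each other), not the actual optimal-design algorithm; the design space here is a hypothetical one.

# Two ways to pick which configuration variants to measure for
# training: uniform random selection (the machine-learning baseline
# named in the quote) and a greedy farthest-point pass that favors
# mutually distant configurations. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(2)
configs = rng.uniform(0, 1, size=(200, 3))  # hypothetical design space

# Strategy 1: uniform random selection.
random_sel = rng.choice(len(configs), size=10, replace=False)

# Strategy 2: greedy farthest-point selection, seeded arbitrarily.
chosen = [0]
while len(chosen) < 10:
    # Distance from every configuration to its nearest chosen one.
    d = np.min(
        np.linalg.norm(configs[:, None, :] - configs[chosen][None, :, :], axis=2),
        axis=1,
    )
    chosen.append(int(np.argmax(d)))  # farthest from everything chosen

print("random:", sorted(random_sel.tolist()))
print("max-distance:", sorted(chosen))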
“…There is increasing interest in the use of machine learning in automatic tuning of program performance, particularly on parallel architectures such as multi-cores and Graphics Processing Units (GPUs) [1,5,8,9,16,19]. Much of this work uses supervised learning methods that rely on training programs, i.e., programs to which various optimizations are applied and whose characteristics and performance are used to build a machine learning model.…”
Section: Introduction (mentioning)
confidence: 99%