2021
DOI: 10.48550/arxiv.2101.04808
Preprint

MLGO: a Machine Learning Guided Compiler Optimizations Framework

Abstract: Leveraging machine-learning (ML) techniques for compiler optimizations has been widely studied and explored in academia. However, the adoption of ML in general-purpose, industry-strength compilers has yet to happen. We propose MLGO, a framework for integrating ML techniques systematically in an industrial compiler, LLVM. As a case study, we present the details and results of replacing the heuristics-based inlining-for-size optimization in LLVM with machine-learned models. To the best of our knowledge, this w…

Cited by 14 publications (21 citation statements). References 22 publications.
“…Of late, there has been considerable interest in applying deep learning techniques to compilers in the areas of phase ordering [17], selection of optimization heuristics [10] and as part of optimization itself in register allocation [13] and inlining [28]. Machine learnt models have been used in optimization heuristics selection such as prediction of unroll factors [26], inlining decisions [25], vectorization [15], [22] etc.…”
Section: Related Work (mentioning)
confidence: 99%
“…Unlike SL algorithms which train on large static datasets, RL algorithms often dynamically collect data throughout training. As discussed further in Section 7, select previous works approach compiler autotuning using DRL [18,19,20,21]. Here, states are derived from characteristics of program code, actions are applied code optimizations or heuristic settings, and rewards are derived from performance measurements.…”
Section: Reinforcement Learning (mentioning)
confidence: 99%
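The statement above describes the standard RL framing of compiler autotuning: states are feature vectors extracted from program code, actions are optimization passes or heuristic settings, and rewards come from performance measurements. A minimal, hypothetical sketch of that loop is shown below — the pass names, feature extractor, and cost model are all illustrative stand-ins, not part of any framework cited here; a real system would compile the program and measure it instead of using a stubbed cost function.

```python
import random

PASSES = ["inline", "unroll", "vectorize", "dce"]  # illustrative action space

def extract_features(program):
    """Stand-in for static program features (e.g. loop and call counts)."""
    return (program.count("loop"), program.count("call"), len(program))

def measure_cost(program, applied):
    """Stubbed cost model; a real system would compile and measure runtime
    or code size. Here each distinct pass 'saves' a fixed amount."""
    return len(program) - 2 * len(set(applied))

def run_episode(program, policy, steps=3):
    """Apply `steps` optimization decisions; reward is the cost reduction
    observed after each action, as in the DRL formulation quoted above."""
    applied, total_reward = [], 0
    cost = measure_cost(program, applied)
    for _ in range(steps):
        state = extract_features(program)   # state: program features
        action = policy(state)              # action: chosen optimization
        applied.append(action)
        new_cost = measure_cost(program, applied)
        total_reward += cost - new_cost     # reward: measured improvement
        cost = new_cost
    return applied, total_reward

# A trivial random policy; a DRL approach would learn this mapping.
random_policy = lambda state: random.choice(PASSES)
```

In practice the policy is a neural network trained on these (state, action, reward) trajectories, which is what distinguishes the RL approaches cited here from supervised learning on a fixed dataset.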
“…Previous works have begun to explore the use of reinforcement learning for compiler optimization [18,19,20,21,31,32]. Huang et al [19] optimize compiler HLS phase ordering using cycle count reduction as a reward signal to guide learning using a framework they call AutoPhase.…”
Section: Related Work (mentioning)
confidence: 99%
“…There is a growing body of work that shows how the performance and portability of compiler optimizations can be improved through autotuning [1], machine learning [2], and reinforcement learning [3], [4], [5]. The goal of these approaches is to supplement or replace the optimization decisions made by hand-crafted heuristics with decisions derived from empirical data.…”
Section: Introduction (mentioning)
confidence: 99%