2017 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW)
DOI: 10.1109/sbac-padw.2017.13
Comparing Performance of C Compilers Optimizations on Different Multicore Architectures

Cited by 14 publications (6 citation statements)
References 5 publications
“…So, assessing the impact of these compilers on soft error reliability is vital to guarantee the success of their products. While most of the work on compilation flags in the literature focuses on performance optimization [33] or on reducing memory usage and code size [34], few assess the soft error reliability provided by compilers [15,17,19].…”
Section: Soft Error Consistency Assessment for Single-Core Processors
confidence: 99%
“…Traditional compiler optimizations define a well-studied area, and some of these issues have been analysed in the literature. However, those works follow a different approach from the one proposed in this paper: they study the compiler phase order [4], compare different compilers [5], optimize specific software [6,7], or evaluate specific flags for some hardware platforms [8].…”
Section: Problem Description
confidence: 99%
“…On multi-core processors, tasks can be divided among the cores. Various application program interfaces, such as Open Multi-Processing (OpenMP) and Threading Building Blocks (TBB), have so far been produced for code parallelisation in different programming languages [5, 6]. Another way to parallelise the classification is to use platforms like the Compute Unified Device Architecture (CUDA) on many-core platforms such as graphics processing units (GPUs) [7–15].…”
Section: Introduction
confidence: 99%