2013
DOI: 10.6028/jres.118.012

A Case Study of Performance Degradation Attributable to Run-Time Bounds Checks on C++ Vector Access

Abstract: Programmers routinely omit run-time safety checks from applications because they assume that these safety checks would degrade performance. The simplest example is the use of arrays or array-like data structures that do not enforce the constraint that indices must be within bounds. This report documents an attempt to measure the performance penalty incurred by two different implementations of bounds-checking in C and C++ using a simple benchmark and a desktop PC with a modern superscalar CPU. The benchmark con…
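The contrast the abstract describes can be illustrated with ordinary C++ vector access. The following is a minimal sketch, assuming the benchmark compares patterns like unchecked operator[] against checked at()-style access; it is not the actual benchmark code from the report.

```cpp
// Minimal sketch of unchecked vs. bounds-checked element access in C++.
// Assumption: the report's benchmark contrasts access patterns of this kind.
#include <vector>
#include <cstddef>

long sum_unchecked(const std::vector<int>& v) {
    long s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v[i];        // operator[]: no run-time bounds check
    return s;
}

long sum_checked(const std::vector<int>& v) {
    long s = 0;
    for (std::size_t i = 0; i < v.size(); ++i)
        s += v.at(i);     // at(): throws std::out_of_range on an invalid index
    return s;
}
```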

Cited by 6 publications (12 citation statements). References 11 publications.
“…An extended example that applies both the Welch-Satterthwaite formula and the Šidák inequality to software performance measurements can be found in Ref. [25].…”
Section: Potential Complications (mentioning)
confidence: 99%
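For context, the two statistical tools named in this excerpt are standard; a compact statement of both, independent of the specific data in Ref. [25], is:

```latex
% Welch-Satterthwaite effective degrees of freedom for combining
% k independent variance estimates s_i^2, each based on n_i observations:
\nu_{\mathrm{eff}} \approx
  \frac{\left(\sum_{i=1}^{k} s_i^2 / n_i\right)^2}
       {\sum_{i=1}^{k} \dfrac{\left(s_i^2 / n_i\right)^2}{n_i - 1}}

% Sidak adjustment: testing each of m simultaneous comparisons at level
% alpha' keeps the familywise error rate at alpha:
\alpha' = 1 - (1 - \alpha)^{1/m}
```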
“…• Controlled variables - Dell Precision T5400 PC as used in [25], fixed CPU frequency. Each run of the test program produced a value for both levels of N by dividing execution time between two functions, with the main program being overhead. The order of tests progressed upward through each level of self-time fragmentation before starting on the next of the 1000 iterations.…”
Section: Gprof (mentioning)
confidence: 99%
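A hedged sketch of the kind of two-function test program this excerpt describes: the function names and workload split below are illustrative assumptions, not code from the cited study; only the overall shape (main acts as overhead, two functions receive the measured self-time) follows the excerpt.

```cpp
// Illustrative two-function test program for gprof-style self-time measurement.
// Build with profiling instrumentation, e.g.:
//   g++ -pg prof_test.cpp -o prof_test && ./prof_test && gprof prof_test
#include <cstdio>

volatile long sink = 0;   // volatile keeps the loops from being optimized away

void work_a(long n) { for (long i = 0; i < n; ++i) sink += i; }
void work_b(long n) { for (long i = 0; i < n; ++i) sink -= i; }

int main() {
    const long N = 100000000L;   // one "level of N"; the cited study varied this
    work_a(N);                   // self-time attributed to work_a
    work_b(N);                   // self-time attributed to work_b
    std::printf("%ld\n", sink);  // main itself contributes only overhead
    return 0;
}
```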
“…However, the Rust code only requires 6% (11%) more cycles per packet overall despite doing more work. Synthetic benchmarks can achieve an even lower overhead of bounds checking [18]. A modern superscalar out-of-order processor can effectively hide the overhead introduced by these safety checks: normal execution does not trigger bounds-check violations, so the processor is able to correctly predict (the branch mispredict rate is at 0.2%-0.3%) and speculatively execute the correct path.…”
Section: The Cost of Safety Features in Rust (mentioning)
confidence: 99%
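The mechanism described here, a safety-check branch that normal execution never takes and that the branch predictor therefore hides, can be sketched in C++ terms. This analogue is an illustration of the effect, not code from the cited Rust measurement.

```cpp
// Sketch of a bounds-check branch that normal execution never takes.
// Because the check fails only on a programming error, a superscalar
// out-of-order CPU predicts it correctly almost always and speculatively
// executes past it, which is the effect the excerpt describes.
#include <cstddef>
#include <cstdlib>

int checked_load(const int* data, std::size_t len, std::size_t i) {
    if (i >= len) {       // almost never true: a highly predictable branch
        std::abort();     // stands in for a panic / out-of-range error
    }
    return data[i];       // executed speculatively while the check resolves
}
```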
“…Mytkowicz et al. [2009] observed that measurement bias is commonplace in published papers with experimental results, while Flater and Guthrie [2013] noted that the earlier observation by Berry [1992], that statistical design of experiments for the analysis of computer system performance is not applied as often as it ought to be, still seems true more than a decade later. This observation was supported by Vitek and Kalibera [2012], who stated that papers in proceedings of computer science conferences regularly appear without a comparison to the state of the art, without appropriate benchmarks, without any mention of limitations, and without sufficient detail to reproduce the experiments.…”
Section: Introduction (mentioning)
confidence: 99%
“…Differing compiler optimization options can drastically change a program's performance [Flater and Guthrie, 2013; Zaparanuks et al., 2009]. Worse yet, code elimination can produce degenerate results.…”
Section: Compiler Optimization Effects (mentioning)
confidence: 99%
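A minimal sketch of the "code elimination" hazard this excerpt mentions: if a benchmark's result is never consumed, an optimizing compiler may remove the measured work entirely, so the timing measures nothing. The example below is illustrative and not drawn from either cited paper.

```cpp
// Illustrative benchmark whose kernel can be eliminated at -O2/-O3,
// producing a degenerate (near-zero) measurement.
#include <chrono>
#include <cstdio>

int main() {
    long s = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (long i = 0; i < 100000000L; ++i)
        s += i;                 // result unused below -> loop may be deleted
    auto t1 = std::chrono::steady_clock::now();
    // Consuming s (e.g. printing it) keeps the loop alive:
    // std::printf("%ld\n", s);
    long long ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("%lld ns\n", ns);
    return 0;
}
```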