2012
DOI: 10.1145/2345156.2254075

Understanding and detecting real-world performance bugs

Abstract: Developers frequently use inefficient code sequences that could be fixed by simple patches. These inefficient code sequences can cause significant performance degradation and resource waste, referred to as performance bugs. Meager increases in single-threaded performance in the multi-core era and increasing emphasis on energy efficiency call for more effort in tackling performance bugs. This paper conducts a comprehensive study of 109 real-world performance bugs that are randomly sampled from five representative…
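The kind of inefficient code sequence and simple patch the abstract refers to can be illustrated with a minimal, hypothetical Java sketch (not drawn from the studied applications): a membership test backed by a List turns an otherwise linear loop quadratic, and a one-line switch to a HashSet restores the expected cost.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class InefficientSequenceDemo {

    // Inefficient sequence: List.contains is O(n), so the loop is O(n * m).
    static List<String> keepKnownWordsSlow(List<String> words, List<String> dictionary) {
        List<String> kept = new ArrayList<>();
        for (String w : words) {
            if (dictionary.contains(w)) {   // linear scan on every iteration
                kept.add(w);
            }
        }
        return kept;
    }

    // "Simple patch": build a HashSet once, making each lookup O(1) on average.
    static List<String> keepKnownWordsFast(List<String> words, List<String> dictionary) {
        Set<String> dict = new HashSet<>(dictionary);
        List<String> kept = new ArrayList<>();
        for (String w : words) {
            if (dict.contains(w)) {
                kept.add(w);
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<String> dictionary = List.of("alpha", "beta", "gamma");
        List<String> words = List.of("alpha", "delta", "gamma");
        System.out.println(keepKnownWordsSlow(words, dictionary)); // [alpha, gamma]
        System.out.println(keepKnownWordsFast(words, dictionary)); // [alpha, gamma]
    }
}
```

The patch leaves functional behavior unchanged and only improves efficiency, which matches the abstract's notion of performance bugs that simple patches can fix.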

Cited by 145 publications (170 citation statements)
References 52 publications
“…Performance problems are often related to expensive loops [23]. As described in Section 4.3.2, in FDD we predict the outcome of changes in code by utilizing existing data to infer feedback for new software artifacts.…”
Section: Critical Loop Prediction
Mentioning confidence: 99%
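As a hedged illustration of the expensive loops this citation statement refers to (a hypothetical Java sketch, not code from either paper), loop-invariant work such as recompiling a regular expression on every iteration dominates the loop's cost and is removed by hoisting it out of the loop.

```java
import java.util.List;
import java.util.regex.Pattern;

public class ExpensiveLoopDemo {

    // Expensive loop: the pattern is recompiled on every iteration even though
    // it never changes, so the loop's cost is dominated by redundant work.
    static long countMatchesSlow(List<String> lines, String regex) {
        long count = 0;
        for (String line : lines) {
            if (Pattern.compile(regex).matcher(line).find()) {
                count++;
            }
        }
        return count;
    }

    // Patched version: hoist the loop-invariant compilation out of the loop.
    static long countMatchesFast(List<String> lines, String regex) {
        Pattern pattern = Pattern.compile(regex);
        long count = 0;
        for (String line : lines) {
            if (pattern.matcher(line).find()) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("GET /index.html", "POST /login", "GET /about");
        System.out.println(countMatchesSlow(lines, "^GET"));  // 2
        System.out.println(countMatchesFast(lines, "^GET"));  // 2
    }
}
```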
“…Different categories could be rated with 5 choices ranging from Never to Very often. The categories have been chosen based on existing surveys from related domains [7,5,13,8]. The results in Figure 2c indicate that algorithmic faults are the most frequent cause for failures, followed by resource leaks (not limited to memory) and skippable computations.…”
Section: Induced Faults
Mentioning confidence: 99%
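The citing survey groups failure causes into algorithmic faults, resource leaks (not limited to memory), and skippable computations. The following minimal Java sketch, hypothetical and not taken from the cited survey, illustrates a non-memory resource leak, a leaked file handle, and its one-line try-with-resources fix.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceLeakDemo {

    // Leaky version: if readLine() throws, or once the method returns,
    // the underlying file descriptor is never released.
    static String firstLineLeaky(Path file) throws IOException {
        BufferedReader reader = Files.newBufferedReader(file);
        return reader.readLine();          // reader is never closed
    }

    // Fixed version: try-with-resources closes the reader on every exit path,
    // so repeated calls cannot exhaust the process's file-descriptor limit.
    static String firstLineSafe(Path file) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello\nworld\n");
        System.out.println(firstLineSafe(tmp));   // hello
        Files.delete(tmp);
    }
}
```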
“…through crashing important components, but instead slowly degrade its perceived or computational performance and often only occur on special inputs. Hence, they are much harder to detect and easily missed during short testing cycles in active development work [9]. Also, these non-catastrophic issues have a higher potential for being recovered at runtime and therefore are the most valuable ones to detect with FDI methods.…”
Section: Target Systems and Assumptions
Mentioning confidence: 99%