Proceedings of the Second ACM-IEEE International Symposium on Empirical Software Engineering and Measurement 2008
DOI: 10.1145/1414004.1414013

On establishing a benchmark for evaluating static analysis alert prioritization and classification techniques

Abstract: Benchmarks provide an experimental basis for evaluating software engineering processes or techniques in an objective and repeatable manner. We present the FAULTBENCH benchmark, as a contribution to current benchmark materials, for the evaluation and comparison of techniques that prioritize and classify alerts generated by static analysis tools. Alert prioritization and classification address the problem, common to many static analysis tools, of numerous alerts that are not an indication of a fault or are unimportant to the d…

Cited by 79 publications (91 citation statements) · References 19 publications

“…Our approach ranks warnings on a different aspect of source code than those they consider and could be used to complement their model. Finally, Heckman et al. proposed Faultbench, a benchmark for comparison and evaluation of static analysis alert prioritization and classification techniques [18] and used it to validate the Aware [19] tool to prioritize static analysis tool warnings. Since the results of our approach are promising, further research could investigate our approach against this additional benchmark.…”
Section: Related Work
confidence: 99%
“…Zitser [6]. Heckman et al. proposed a benchmark and procedures for the evaluation of software inspection prioritization and classification techniques [11]. Unfortunately, the benchmark is focused on Java programs.…”
Section: Related Work
confidence: 99%
“…We use three different classification algorithms to classify the feature vectors: decision tree (ADTree), naive Bayes, and Bayesian network (BayesNet). The selection of these three classifiers is based on our experience classifying alerts from the FaultBench v0.1 [17,13] benchmark. For all classifiers we use the default parameters.…”
Section: Classification
confidence: 99%
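The classifier names in this excerpt (ADTree, NaiveBayes, BayesNet) match the Weka toolkit's class names, so the setup can be sketched with Weka, though the excerpt does not say which toolkit the authors used. Below is a minimal sketch under that assumption; the ARFF file name "alerts.arff" and the 10-fold cross-validation are illustrative choices, not details from the cited paper (ADTree ships with Weka 3.6 and is a separate add-on package in later releases).

    // Minimal sketch, assuming the Weka toolkit; alerts.arff is a
    // hypothetical file of labeled alert feature vectors.
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.BayesNet;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.trees.ADTree;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class AlertClassification {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("alerts.arff");
            // Last attribute is the binary actionable/unactionable label.
            data.setClassIndex(data.numAttributes() - 1);

            // Default parameters for all three classifiers, as the excerpt describes.
            Classifier[] classifiers = { new ADTree(), new NaiveBayes(), new BayesNet() };
            for (Classifier c : classifiers) {
                Evaluation eval = new Evaluation(data);
                // 10-fold cross-validation: trains and tests on each fold in turn,
                // so no separate train/test split is needed for a first comparison.
                eval.crossValidateModel(c, data, 10, new java.util.Random(1));
                System.out.printf("%s: %.1f%% correctly classified%n",
                        c.getClass().getSimpleName(), eval.pctCorrect());
            }
        }
    }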
“…To do this, we use the FaultBench v0.3 [16] method proposed by Heckman and Williams [13]. This technique uses the source code history of a project to determine whether alerts are actionable or unactionable.…”
Section: Ground Truth
confidence: 99%
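The history-based labeling idea in this excerpt can be illustrated with a simplified sketch: an alert that disappears in a later revision while its file still exists is assumed to have been fixed by a developer (actionable), while one that survives to the final revision is treated as unactionable. This is a rough approximation of the general idea, not the actual FaultBench v0.3 procedure; the Alert and Revision types are hypothetical.

    import java.util.*;

    // Simplified sketch of history-based ground-truth labeling (an
    // approximation of the idea in the excerpt above, not the exact
    // FaultBench v0.3 procedure).
    public class GroundTruthLabeler {
        record Alert(String file, String checkId, int line) {}
        record Revision(String id, Set<Alert> alerts, Set<String> files) {}

        // Label each alert from the first revision: true = actionable.
        static Map<Alert, Boolean> label(List<Revision> history) {
            Map<Alert, Boolean> labels = new HashMap<>();
            Revision last = history.get(history.size() - 1);
            for (Alert a : history.get(0).alerts()) {
                boolean closedByFix = false;
                for (Revision r : history.subList(1, history.size())) {
                    // Alert gone while its file remains: likely fixed by a change.
                    if (!r.alerts().contains(a) && r.files().contains(a.file())) {
                        closedByFix = true;
                        break;
                    }
                }
                // Actionable only if it was closed by a fix and is still gone
                // at the final revision; alerts that persist are unactionable.
                labels.put(a, closedByFix && !last.alerts().contains(a));
            }
            return labels;
        }
    }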