2011
DOI: 10.1007/s10664-011-9190-8
Can traditional fault prediction models be used for vulnerability prediction?

Cited by 170 publications (109 citation statements). References 35 publications.
“…With respect to the software metrics approach, we found different results from all the other studies [8], [13], [6], i.e., better precision and significantly lower recall. This difference could be explained by the way we construct our datasets and/or by the fact that these studies use undersampling to balance their datasets, which may have affected the reported results.…”
Section: B. Differences With Previous Studies (citation type: contrasting)
confidence: 87%
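To make the undersampling point in the statement above concrete, the following is a minimal sketch of random undersampling of the majority (non-vulnerable) class before training. The column name `vulnerable` and the use of pandas are assumptions made for illustration; they are not taken from the cited studies.

import pandas as pd

def undersample(df, label="vulnerable", seed=0):
    # Keep every vulnerable row and randomly sample an equal number of
    # non-vulnerable rows, producing a 1:1 balanced training set.
    pos = df[df[label] == 1]
    neg = df[df[label] == 0].sample(n=len(pos), random_state=seed)
    return pd.concat([pos, neg]).sample(frac=1.0, random_state=seed)

# Typically only the training split is balanced while the (imbalanced) test
# split is left untouched, which is one reason reported precision and recall
# can differ between studies that do and do not undersample.
# balanced_train = undersample(train_df)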
“…This is quite encouraging since it suggests that vulnerability prediction models can be useful and practical. As shown in Figures 4 and 5, the top-performing prediction models achieve precision values of approximately 75% with recall of approximately 50%, which the studies of Morrison et al. [10] and Shin et al. [13] judge to be satisfactory. Interestingly, we found only a small influence of the imbalanced data (as shown by the differences between Figures 4 and 5) on our results.…”
Section: A. Implications (citation type: mentioning)
confidence: 75%
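As a back-of-the-envelope illustration of how precision of roughly 75% can coexist with recall of roughly 50%, the counts below are invented solely to show the arithmetic; they are not figures reported in the cited study.

tp = 50   # vulnerable files correctly flagged
fp = 17   # clean files wrongly flagged as vulnerable
fn = 50   # vulnerable files missed

precision = tp / (tp + fp)   # 50 / 67 ≈ 0.75
recall = tp / (tp + fn)      # 50 / 100 = 0.50
print(f"precision={precision:.2f}, recall={recall:.2f}")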
“…However, not all of the vulnerabilities that a product contains are reported, and therefore many components that are considered clean in the dataset may in fact be vulnerable. Moreover, the number of vulnerable files that a software product contains is often very small [35], leading to highly imbalanced datasets, which significantly influence the accuracy of the produced predictors [32]. The use of a balanced and sound dataset is expected to improve the accuracy of the produced VPMs.…”
Section: Vulnerability Prediction Modeling (citation type: mentioning)
confidence: 99%
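One common way to cope with such imbalance when a better-balanced dataset is not available is to reweight the minority class rather than discard majority samples. The scikit-learn snippet below is a generic illustration of that idea under assumed feature and label variables; it is not the approach used in the cited work.

from sklearn.linear_model import LogisticRegression

# class_weight="balanced" scales each class's contribution to the loss by
# n_samples / (n_classes * class_count), so the rare vulnerable class is
# not drowned out by the clean majority.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
# model.fit(X_train, y_train)   # X_train, y_train: hypothetical features and labels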