2014
DOI: 10.1109/tse.2014.2354037
An Empirical Methodology to Evaluate Vulnerability Discovery Models

Abstract: Vulnerability discovery models (VDMs) operate on known vulnerability data to estimate the total number of vulnerabilities that will be reported after a software is released. VDMs have been proposed by industry and academia, but there has been no systematic independent evaluation by researchers who are not model proponents. Moreover, the traditional evaluation methodology has some issues that biased previous studies in the field. In this work we propose an empirical methodology that systematically evaluates the…

Cited by 39 publications (34 citation statements)
References 33 publications
“…When a vulnerability is fixed by only adding lines of code, there will be no evidence to track, and the authors in [20] conservatively assume that in such cases the whole version prior to the fix (namely, code(r 0 )) is vulnerable. This screening test was appropriate for the empirical analysis of Vulnerability Discovery Models [39], which are typically based on the NVD and its cautious assumption "r 0 is vulnerable and so are all its previous versions" (see [20]), as this would create a consistent approximation of the NVD. Essentially, the overall approach can be seen as an instance of the generic screening test that we defined in Algorithm 1.…”
Section: Deletion Screening Criterion
confidence: 99%
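The deletion screening logic quoted above can be sketched as follows. This is an illustrative reading, not the authors' exact Algorithm 1: the function name, parameters, and line-matching rule are assumptions made for the example.

```python
def screen_prior_version(fix_deleted_lines, prior_version_lines):
    """Sketch of a deletion screening criterion.

    A fix that deletes lines leaves evidence to track: a prior
    version is flagged vulnerable only if it still contains the
    deleted (vulnerable) code. A fix that only adds lines leaves
    no evidence, so the whole prior version is conservatively
    flagged vulnerable, matching the NVD-style assumption quoted
    in the citation statement above.
    """
    if not fix_deleted_lines:
        # Fix only added code: nothing to trace, assume vulnerable.
        return True
    # Evidence-based check: does any deleted line appear in the version?
    return any(line in prior_version_lines for line in fix_deleted_lines)


# Hypothetical usage: a fix that removed a faulty bounds check.
flag = screen_prior_version(
    ["if (len > MAX)"],                 # lines deleted by the fix
    ["if (len > MAX)", "memcpy(dst, src, len);"],  # prior version
)
```

The conservative branch (returning `True` when no lines were deleted) is what keeps the screening result a consistent over-approximation of the NVD's "all previous versions are vulnerable" assumption.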
“…The simplest metric is time (since release), and the corresponding model is a Vulnerability Discovery Model. Massacci and Nguyen [14] provide a comprehensive survey and independent empirical validation of several vulnerability discovery models. Several other metrics have been used: code complexity metrics [25,24,16], developer activity metrics [24], static analysis defect densities [27], frequencies of occurrence of programming constructs [21,28], etc.…”
Section: Related Work
confidence: 99%
“…2 As a rule rather than an exception, the security patches from both Red Hat and Microsoft have covered multiple vulnerabilities. Although these counting issues are known to affect estimates (Massacci and Nguyen, 2014), the use of security advisories does not invalidate the theoretical premises as such; regardless of whether the known vulnerabilities are observed individually or in groups, the resulting trends should follow a sigmoidal growth trend.…”
Section: Data
confidence: 99%
“…The literature offers an abundance of sigmoid functions for S-shaped growth patterns (Höök et al, 2011; López et al, 2004; Massacci and Nguyen, 2014; Meade and Islam, 2006; Wang et al, 2014; Zwietering et al, 1990). A classical example is the famous function that Gompertz (1825) formulated to determine the rate of mortality.…”
Section: Growth Curves
confidence: 99%
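The Gompertz curve mentioned above is a minimal example of such a sigmoid: a cumulative count that starts near zero, grows fastest in the middle, and saturates toward an asymptote. A small sketch, with hypothetical parameter values chosen only to illustrate the shape of a cumulative vulnerability-count curve:

```python
import math

def gompertz(t, a, b, c):
    """Gompertz sigmoid a * exp(-b * exp(-c * t)).

    a is the upper asymptote (e.g. total vulnerabilities eventually
    reported), b shifts the curve along the time axis, and c sets
    the growth rate.
    """
    return a * math.exp(-b * math.exp(-c * t))


# Hypothetical parameters for a cumulative discovery curve.
a, b, c = 100.0, 5.0, 0.1
counts = [gompertz(t, a, b, c) for t in range(0, 101, 10)]
# counts rises monotonically from ~a*exp(-b) toward the asymptote a.
```

In practice the parameters would be fitted to observed cumulative counts (e.g. with nonlinear least squares) rather than set by hand; the values here only demonstrate the S-shaped trend the quoted passage refers to.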