2019
DOI: 10.1109/tifs.2019.2895963
Large-Scale Empirical Study of Important Features Indicative of Discovered Vulnerabilities to Assess Application Security

Cited by 32 publications (43 citation statements)
References 32 publications
“…All these studies, which were based on different datasets, observed that a weak but statistically significant correlation exists between software metrics and the existence of vulnerabilities, while they also produced metric-based vulnerability prediction models of satisfactory accuracy. In addition to this, recent studies have shown that combining different software metrics leads to better vulnerability predictors and may thus constitute a meaningful approach for enhancing security assessment (Zhang et al., 2019).…”
Section: Existing Software Security Assessment Approaches
confidence: 99%
“…However, the reliability of these models is hindered since they are based exclusively on OO metrics, which were found to be only weak indicators of vulnerabilities (Shin & Williams, 2008; Chowdhury & Zulkernine, 2011; Shin et al., 2011; Siavvas et al., 2017; Siavvas et al., 2017b; Moshtari et al., 2013; Moshtari & Sami, 2016; Stuckman et al., 2017; Ferenc et al., 2019; Jimenez et al., 2019; Zhang et al., 2019). In addition, their parameters (i.e., thresholds, weights, etc.)…”
Section: Existing Software Security Assessment Approaches
confidence: 99%
“…The classification approach is the preferred one in the Vulnerability Prediction (VP) domain. SVPs can be based on different types of features: Software Metrics (SM) [2, 29, 30], Text Mining (TM) features [36, 41–43], ASA alerts [27, 44], and hybrid ones [45–47]. To create SVPs, different algorithms are used: decision trees [43, 45], random forests [43, 48, 49], boosted trees [45], Support Vector Machines (SVM) [50], linear discriminant analysis [2], Bayesian Networks [2], linear regression [45], the naive Bayes classifier [41], K-nearest neighbors [43], as well as artificial neural networks and deep learning [36, 47, 51, 52].…”
Section: Related Work
confidence: 99%
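The feature-and-algorithm combinations surveyed above can be illustrated with a minimal sketch: a random-forest classifier trained on software metrics to flag potentially vulnerable components. All data, feature names, and thresholds below are synthetic placeholders for illustration, not values from the cited studies.

```python
# Hypothetical sketch of a metric-based software vulnerability predictor (SVP).
# Features and labels are synthetic; real studies use metrics mined from code
# repositories (e.g., LOC, cyclomatic complexity, churn) and known CVE labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# 500 components, 3 illustrative metrics per component
X = rng.normal(size=(500, 3))
# Synthetic ground truth: 1 = component contains a known vulnerability
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"F1 on held-out components: {f1_score(y_te, clf.predict(X_te)):.2f}")
```

Any of the other algorithms listed (SVM, naive Bayes, K-nearest neighbors, etc.) could be dropped in for the random forest with the same train/predict interface.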
“…A large number of VPMs has been proposed in the literature over the past decade [3]. As stated in [14], the main VPMs that can be found in the literature utilize software metrics [10, 17, 18], text mining [9, 15], and security-related static analysis alerts [12, 13] to predict vulnerabilities in software products. However, as stated before, while these models have demonstrated promising results in predicting the existence of vulnerabilities in the software projects on which they have been built (i.e., within-project vulnerability prediction), they have failed to demonstrate sufficient performance in cross-project vulnerability prediction.…”
Section: Related Work
confidence: 99%
“…Several VPMs have been proposed over the years utilizing various software factors as inputs for predicting the existence of vulnerable components, including software metrics [10], text mining [9, 11], and static analysis alerts [12, 13]. Although these models have demonstrated promising results in predicting the existence of vulnerabilities in the projects on which they have been trained (i.e., within-project vulnerability prediction), they failed to sufficiently predict the existence of vulnerabilities in previously unknown software projects (i.e., cross-project vulnerability prediction) [3, 14].…”
Section: Introduction
confidence: 99%
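The within-project versus cross-project distinction drawn in the excerpts above amounts to a choice of evaluation split. A small sketch, on entirely synthetic data with hypothetical project identifiers, contrasts the two setups:

```python
# Within-project vs. cross-project evaluation of a vulnerability predictor.
# Data, metrics, and project labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)
n = 600
X = rng.normal(size=(n, 4))              # synthetic software metrics
project = rng.integers(0, 3, size=n)     # 3 hypothetical projects
# Project-dependent label noise mimics differing vulnerability patterns
y = (X[:, 0] + 0.3 * project + rng.normal(scale=0.7, size=n) > 1).astype(int)

# Within-project: a random split mixes all projects across train and test
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
within_f1 = f1_score(
    y_te, RandomForestClassifier(random_state=1).fit(X_tr, y_tr).predict(X_te))

# Cross-project: hold out one entire project as the test set
train = project != 2
cross_f1 = f1_score(
    y[~train],
    RandomForestClassifier(random_state=1)
    .fit(X[train], y[train]).predict(X[~train]))

print(f"within-project F1={within_f1:.2f}, cross-project F1={cross_f1:.2f}")
```

Holding out a whole project prevents the model from memorizing project-specific metric distributions, which is why cross-project scores are typically the harder benchmark.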