2020
DOI: 10.1007/s11219-019-09490-1
The effect of Bellwether analysis on software vulnerability severity prediction models

Cited by 24 publications (19 citation statements); references 61 publications.
“…Classifying Vulnerability Categories: This topic includes publications that construct models to classify the categories of vulnerabilities. For example, P39 [5] classified vulnerabilities based on the predicted severity level of the vulnerabilities using ML models on historical vulnerability data.…”
Section: P7 [58] Proposed a Novel Approach Called 'Ltrwes' (mentioning)
Confidence: 99%
“…P28 [62] used word embeddings and a one-layer shallow convolutional neural network (CNN) to automatically capture discriminative word and sentence features of bug report descriptions. P39 [5] presented another framework for vulnerability severity classification using Bellwether analysis (i.e., exemplary data) [63]. They applied NLP techniques to bug report descriptions.…”
Section: P7 [58] Proposed a Novel Approach Called 'Ltrwes' (mentioning)
Confidence: 99%
“…In such a skewed learning environment, the testing sample may follow the same distribution as the training sample. However, many of the existing bug severity prediction models trained on historical data neither report the distribution of the datasets used to train them nor acknowledge this issue [7,8,9,10,11,12,13,14]. This implies that they assumed the underlying data distribution is balanced, which may not be true in all real datasets, such as the one used in this project.…”
Section: Introduction (mentioning)
Confidence: 98%
“…Hence, the objective of this project is to develop bug severity prediction models for imbalanced learning environments and to evaluate them properly using an appropriate metric. Many models developed in the literature [7,8,9,10,11,12,13,14] used accuracy (correctly classified instances divided by the total number of instances) to evaluate prediction quality in skewed learning environments [4,5]. However, accuracy does not provide a fair judgment of the quality of models trained on unevenly distributed datasets like the one used in this project.…”
Section: Introduction (mentioning)
Confidence: 99%
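The point made in the quoted excerpt (that accuracy is misleading on skewed severity data) can be illustrated with a minimal sketch. This example is not from the cited paper; the 95/5 class split and the "low"/"high" severity labels are assumptions chosen only to demonstrate the effect: a trivial majority-class classifier attains high accuracy while completely missing the minority class, which a per-class metric such as F1 exposes.

```python
# Hypothetical illustration: accuracy vs. F1 on an imbalanced
# severity dataset, using a majority-class baseline classifier.

def accuracy(y_true, y_pred):
    # Fraction of instances classified correctly.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive):
    # F1 score for one class: harmonic mean of precision and recall.
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Assumed skewed dataset: 95 "low"-severity and 5 "high"-severity reports.
y_true = ["low"] * 95 + ["high"] * 5
y_pred = ["low"] * 100  # baseline that always predicts the majority class

print(accuracy(y_true, y_pred))             # 0.95 -- looks excellent
print(f1(y_true, y_pred, positive="high"))  # 0.0  -- minority class missed
```

The 0.95 accuracy reflects only the class skew, not predictive skill, which is why the project quoted above argues for metrics that account for the minority class.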