2019
DOI: 10.1007/s10664-019-09736-3

The impact of context metrics on just-in-time defect prediction

Cited by 45 publications (27 citation statements)
References 50 publications
“…This observation suggests that it might be relatively uninformative to learn from later life cycle data. This was an interesting finding since, as mentioned in the introduction, it is common practice in defect prediction to perform "recent validation" where predictors are tested on the latest release after training from the prior one or two releases [10], [13], [14]. In terms of Figure 2, that strategy would train on red dots taken near the right-hand-side, then test on the most right-hand-side dot.…”
Section: A GitHub Results (mentioning)
confidence: 90%
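The "recent validation" setup described in this excerpt (train on the one or two releases before the latest, test on the latest) can be made concrete with a small sketch. This is an illustrative assumption about the data layout (a table with a "release" column, feature columns, and a "buggy" label), not the exact pipeline of the cited studies.

```python
# Minimal sketch of "recent validation": train a defect predictor on the
# release(s) immediately preceding the newest release, evaluate on the newest.
# Column names ("release", "buggy") and the feature set are assumed here.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def recent_validation(df: pd.DataFrame, feature_cols, n_train_releases: int = 2) -> float:
    """Train on the n releases before the last one, test on the last release."""
    releases = sorted(df["release"].unique())
    test_release = releases[-1]
    train_releases = releases[-1 - n_train_releases:-1]

    train = df[df["release"].isin(train_releases)]
    test = df[df["release"] == test_release]

    clf = RandomForestClassifier(random_state=0)
    clf.fit(train[feature_cols], train["buggy"])
    return clf.score(test[feature_cols], test["buggy"])
```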
“…"just in time", "software" AND " defect prediction" AND "sampling policy"). "Just in time (JIT)" defect prediction is a widely-used approach where the code seen in each commit is assessed for its defect prone-ness [13], [14], [38], [63].…”
Section: Sampling Policies (mentioning)
confidence: 99%
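The commit-level setting this excerpt refers to can be sketched as a classifier over change metrics. The feature names below (lines added, lines deleted, files changed, developer experience) are illustrative assumptions and not the exact metric set used in the cited work.

```python
# Sketch of just-in-time (commit-level) defect prediction: each commit is
# represented by change metrics and a classifier scores its defect proneness.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["lines_added", "lines_deleted", "files_changed", "developer_experience"]

def train_jit_model(commits: list[dict], labels: list[int]) -> LogisticRegression:
    """Fit a simple change-level defect classifier on labeled historical commits."""
    X = np.array([[c[f] for f in FEATURES] for c in commits])
    return LogisticRegression(max_iter=1000).fit(X, labels)

def defect_proneness(model: LogisticRegression, commit: dict) -> float:
    """Score a single incoming commit: probability that it is defect inducing."""
    x = np.array([[commit[f] for f in FEATURES]])
    return float(model.predict_proba(x)[0, 1])
```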
“…Summary of answers to RQ2: Area Under the Curve (AUC), the area under the receiver operating characteristic curve, independent of the cutoff value; false positive rate (FPR), the ratio between the number of negative events wrongly categorized as positive and the total number of actual negative events.…”
Section: Evaluation Metrics For Predictive Models In Classification Tasks (mentioning)
confidence: 99%
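The two metrics named in this excerpt can be computed directly; a short sketch follows, with toy labels and scores used only for illustration. FPR is FP / (FP + TN), and AUC is independent of the classification cutoff.

```python
# Sketch of the two evaluation metrics from the excerpt: AUC and FPR.
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true  = [0, 0, 1, 1, 0, 1]                    # actual defect labels (toy data)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]       # predicted defect probabilities

auc = roc_auc_score(y_true, y_score)            # cutoff-independent ranking quality

y_pred = [int(s >= 0.5) for s in y_score]       # apply a cutoff only for FPR
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)                            # negatives wrongly flagged / all negatives

print(f"AUC = {auc:.2f}, FPR = {fpr:.2f}")
```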
“…Just-in-time (JIT) defect prediction: Compared with traditional defect prediction at class or file level, Just-in-Time (JIT) defect prediction is of more practical value for practitioners, as it aims to identify defect-inducing changes. Many studies focused on JIT defect prediction by employing the SZZ approach [61,110,111,123,252]. In order to identify bug-introducing changes, SZZ first detects the bug-fixing changes whose change log contains a bug identifier.…”
Section: Software Maintenance (mentioning)
confidence: 99%
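A rough sketch of the SZZ idea summarized above: flag commits whose log message references a bug identifier as bug-fixing, then blame the fixing commit's parent for each removed line to find the candidate bug-introducing change. The bug-id pattern and the use of plain `git` commands here are simplifying assumptions, not the exact SZZ implementation of the cited studies.

```python
# Simplified SZZ sketch: (1) detect bug-fixing commits via their log message,
# (2) trace each line removed by the fix back to the commit that last touched it.
import re
import subprocess

BUG_ID = re.compile(r"(fix(es|ed)?|bug|issue)\s*#?\d+", re.IGNORECASE)

def is_bug_fixing(message: str) -> bool:
    """Heuristic: the change log mentions a bug identifier."""
    return bool(BUG_ID.search(message))

def blame_inducing_commit(repo: str, fix_sha: str, path: str, line_no: int) -> str:
    """Blame the fixing commit's parent for one removed line; return the blamed SHA."""
    out = subprocess.run(
        ["git", "-C", repo, "blame", "-L", f"{line_no},{line_no}",
         f"{fix_sha}^", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.split()[0]  # first field of the blame line is the candidate inducing SHA
```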
“…Normally, classifiers can be trained on previously collected datasets. Existing ML techniques have facilitated many SDP approaches [30][31][32][33]. The classifier with the highest performance index is selected by evaluating performance in terms of balance, which reflects class imbalance.…”
(mentioning)
confidence: 99%
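The "balance" criterion mentioned in this excerpt is commonly defined as one minus the normalized Euclidean distance from the ideal ROC point (pf = 0, pd = 1), where pd is recall and pf is the false positive rate; whether the cited study uses exactly this formulation is an assumption. A short sketch, with made-up confusion-matrix counts:

```python
# Sketch of the balance metric: balance = 1 - sqrt((0 - pf)^2 + (1 - pd)^2) / sqrt(2)
from math import sqrt

def balance(tp: int, fp: int, tn: int, fn: int) -> float:
    pd = tp / (tp + fn) if (tp + fn) else 0.0   # probability of detection (recall)
    pf = fp / (fp + tn) if (fp + tn) else 0.0   # probability of false alarm (FPR)
    return 1 - sqrt((0 - pf) ** 2 + (1 - pd) ** 2) / sqrt(2)

# Example: select the classifier with the highest balance on a held-out set
# (counts below are illustrative, given as tp, fp, tn, fn).
candidates = {"rf": (40, 10, 90, 10), "nb": (45, 30, 70, 5)}
best = max(candidates, key=lambda name: balance(*candidates[name]))
```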