2018
DOI: 10.1109/tse.2017.2724538

A Comparative Study to Benchmark Cross-Project Defect Prediction Approaches

Cited by 181 publications (147 citation statements)
References 38 publications
“…This led to a wrong determination of ranks, which explains the inconsistencies found by Y. Zhou. 1. For simplicity, we refer to the studentized range distribution as qtukey(α, N), following the name of the related method in R…”
Section: Z-Values Instead of Ranks
confidence: 99%
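The qtukey(α, N) notation above refers to R's quantile function for the studentized range distribution. As a minimal sketch of how such a quantile enters rank-based comparisons, assuming a Demšar-style Nemenyi post-hoc test (SciPy's studentized_range stands in for R's qtukey; the test choice and all parameter values are illustrative and not taken from the cited articles):

```python
# A minimal sketch, assuming the Nemenyi post-hoc test; parameter values are
# illustrative only. SciPy's studentized_range plays the role of R's qtukey.
import numpy as np
from scipy.stats import studentized_range  # requires SciPy >= 1.7

def nemenyi_critical_distance(alpha: float, k: int, n_datasets: int) -> float:
    """Critical distance for comparing k approaches over n_datasets data sets.

    Two approaches differ significantly if their mean ranks differ by more
    than this value.
    """
    # Upper-tail studentized range quantile, analogous to R's
    # qtukey(1 - alpha, nmeans = k, df); a large finite df approximates the
    # asymptotic (infinite-df) value used by the Nemenyi test.
    q_alpha = studentized_range.ppf(1 - alpha, k, 1000) / np.sqrt(2)
    return q_alpha * np.sqrt(k * (k + 1) / (6 * n_datasets))

# Example: 5 approaches compared across 20 data sets at alpha = 0.05.
print(nemenyi_critical_distance(0.05, k=5, n_datasets=20))
```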
“…Unfortunately, the article "A Comparative Study to Benchmark Cross-project Defect Prediction Approaches" [1] has a problem in the statistical analysis performed to rank Cross-Project Defect Prediction (CPDP) approaches. Prof. Yuming Zhou from Nanjing University pointed out an inconsistency in Table 8 of the article.…”
Section: Introduction
confidence: 99%
“…Since target software projects usually lack labeled modules, a possible solution is to use other historical projects with labeled modules to train the prediction models. This setting is called CPDP. However, the data distributions of the target and source projects usually differ, which makes CPDP a challenging task.…”
Section: Related Work
confidence: 99%
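As a minimal illustration of the CPDP setting this excerpt describes, the following sketch trains on a labeled source project and predicts on an unlabeled, distribution-shifted target project; the data, the classifier choice, and the shift are all hypothetical and do not reproduce any cited approach:

```python
# A minimal CPDP sketch, assuming scikit-learn; all data is synthetic and
# hypothetical, and the classifier choice is illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Source project: labeled modules (code metrics plus defect labels).
X_source = rng.normal(size=(500, 10))           # 10 metrics per module
y_source = rng.integers(0, 2, size=500)         # 1 = defective, 0 = clean

# Target project: metrics are available, but labels are not.
X_target = rng.normal(loc=0.5, size=(200, 10))  # shifted distribution

# Cross-project step: train on the labeled source project only...
clf = RandomForestClassifier(random_state=0).fit(X_source, y_source)

# ...then predict defect-proneness for the unlabeled target modules.
target_predictions = clf.predict(X_target)
print(target_predictions[:10])
```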
“…Research regarding software defect prediction for the accurate prediction of post-release defects is an ongoing and still unresolved research topic that was already discussed in hundreds of publications [1], [2], [3]. Current research focuses on problems like cross-project defect prediction (e.g., [4]), heterogeneous defect prediction (e.g., [5], [6]), unsupervised defect prediction (e.g., [7], [8]), and just-in-time defect prediction (e.g., [9]). Additionally, researchers have turned their attention to how defect prediction research should be conducted, e.g., reducing bias through sampling approaches [10], the impact of hyperparameter tuning [11], suitable baseline comparisons [12], or general guidelines that should be considered [13].…”
Section: Introduction
confidence: 99%
“…Additionally, researchers have turned their attention to how defect prediction research should be conducted, e.g., reducing bias through sampling approaches [10], the impact of hyperparameter tuning [11], suitable baseline comparisons [12], or general guidelines that should be considered [13]. While all of the above contribute to the advancement of the defect prediction state of the art, multiple publications in recent years have also questioned this progress through replications, demonstrating that older (e.g., [4]) or trivial (e.g., [14], [15]) approaches are comparable to or even better than more complex recent approaches from the state of the art.…”
Section: Introduction
confidence: 99%