2021
DOI: 10.1016/j.asoc.2021.107870
Inter-release defect prediction with feature selection using temporal chunk-based learning: An empirical study

Cited by 14 publications (14 citation statements)
References 98 publications
“…Many different machine learning algorithms have been used in building software fault-proneness prediction models. These include J48 (Moser et al, 2008;Kamei et al, 2010;Krishnan et al, 2013), Random Forest (RF) (Guo et al, 2004;Mahmood et al, 2018;Fiore et al, 2021;Gong et al, 2021), and combinations of several machine learning algorithms, e.g., OneR, J48, and Naïve Bayes (NB) in (Menzies et al, 2007), RF, NB, RPart, and SVM in (Bowes et al, 2018), J48, RF, NB, Logistic Regression (LR), PART, and G-Lasso in (Goseva-Popstojanova et al, 2019), and Decision Tree (DT), k-Nearest Neighbor (kNN), LR, NB, and RF in (Kabir et al, 2021). With recent advances in Deep Neural Networks (DNN), some software fault-proneness prediction studies used deep learning (Wang et al, 2016;Li et al, 2017;Pang et al, 2017;Zhou et al, 2019;Zhao et al, 2021).…”
Section: Related Work
confidence: 99%
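The citing passage above surveys classifiers commonly applied to fault-proneness prediction, with Random Forest among the most frequently used. As a hedged illustration only (the datasets, metric names, and labeling rule below are synthetic, not from any cited study), a minimal Random Forest fault-proneness sketch with scikit-learn could look like:

```python
# Illustrative sketch: Random Forest fault-proneness prediction on
# synthetic static-code-metric data. Feature names and the labeling
# rule are hypothetical, chosen only to demonstrate the workflow.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(42)
n = 400
# Illustrative module-level metrics: LOC, cyclomatic complexity, churn
X = np.column_stack([
    rng.integers(10, 1000, n),   # lines of code
    rng.integers(1, 50, n),      # cyclomatic complexity
    rng.integers(0, 200, n),     # code churn
])
# Synthetic label: modules with high complexity AND high churn
# are marked fault-prone (a toy rule, not an empirical finding)
y = ((X[:, 1] > 25) & (X[:, 2] > 100)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print("F1 on held-out modules:", f1_score(y_te, clf.predict(X_te)))
```

In practice the studies cited above train on real metric suites (e.g., CK metrics, change metrics) with labels mined from issue trackers, and typically compare several classifiers rather than one.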
“…In general, the extracted software metrics can be static code metrics, change metrics, or social metrics. Static code metrics are collected from the software source code or binary code units (Koru and Liu, 2005;Menzies et al, 2007;Lessmann et al, 2008;Menzies et al, 2010;He et al, 2013;Ghotra et al, 2015;Bowes et al, 2018;Kabir et al, 2021). Change metrics, sometimes called process metrics, are collected from the projects' development history (i.e., commit logs) and bug tracking systems (Nagappan et al, 2010;Giger et al, 2011;Krishnan et al, 2011, 2013;Goseva-Popstojanova et al, 2019).…”
Section: Related Work
confidence: 99%
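The passage above distinguishes static code metrics (computed from source) from change metrics (mined from commit history). As a hedged sketch only (the commit-log structure and metric names below are illustrative, not any cited study's schema), deriving two simple change metrics per file from a parsed log could look like:

```python
# Illustrative sketch: simple change (process) metrics per file from a
# parsed commit log. The log structure and field names are hypothetical.
from collections import defaultdict

commits = [
    {"files": ["a.py", "b.py"], "author": "alice"},
    {"files": ["a.py"],         "author": "bob"},
    {"files": ["b.py", "c.py"], "author": "alice"},
]

revisions = defaultdict(int)   # commits touching each file
authors = defaultdict(set)     # distinct developers per file
for commit in commits:
    for f in commit["files"]:
        revisions[f] += 1
        authors[f].add(commit["author"])

change_metrics = {
    f: {"revisions": revisions[f], "authors": len(authors[f])}
    for f in revisions
}
print(change_metrics["a.py"])  # {'revisions': 2, 'authors': 2}
```

Real studies compute richer variants (e.g., lines added/deleted, age, past bug fixes) over full version-control histories, often joined with bug-tracker data to label fault-prone files.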