2021
DOI: 10.1016/j.infsof.2021.106648

Code smell detection using feature selection and stacking ensemble: An empirical investigation

Cited by 44 publications (55 citation statements)
References 41 publications
“…Their results showed that SMOTE could not improve the God Class detection performance. Alazba and Aljamaan (2021) studied the effect of the stacking ensemble on six code smells and concluded that Stacking with LR and SVM performs better than all individual classifiers. Aljamaan investigated the performance of the voting ensemble on CSD.…”
Section: Imbalanced Learning for CSD (mentioning)
confidence: 99%
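The stacking setup referred to above (base learners combined under an LR or SVM meta-learner) can be sketched with scikit-learn. The base learners, synthetic data, and scoring below are illustrative assumptions, not the exact configuration evaluated by Alazba and Aljamaan (2021).

```python
# Hedged sketch: stacking ensemble for binary code smell detection,
# with Logistic Regression as the meta-learner (an SVM meta-learner
# can be swapped in). Base learners and data are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a code smell dataset (metrics -> smelly / not smelly).
X, y = make_classification(n_samples=500, n_features=20, weights=[0.85, 0.15],
                           random_state=42)

base_learners = [
    ("dt", DecisionTreeClassifier(random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
    ("svm", SVC(probability=True, random_state=42)),
]

# Meta-learner: Logistic Regression (replace with SVC() for an SVM meta-learner).
stacking = StackingClassifier(estimators=base_learners,
                              final_estimator=LogisticRegression(max_iter=1000),
                              cv=5)

scores = cross_val_score(stacking, X, y, cv=5, scoring="f1")
print("Stacking F1 (5-fold CV): %.3f +/- %.3f" % (scores.mean(), scores.std()))
```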
“…These rules may be manually generated by domain experts (Fontana et al., 2016). However, both of these approaches can be time-consuming and cognitively demanding for software engineers (Alazba and Aljamaan, 2021), leading to a shift towards the use of machine learning approaches.…”
Section: Introduction (mentioning)
confidence: 99%
“…FS is the process of identifying and removing the irrelevant and redundant features to improve the performance of the classifier. FS approaches can be divided into three main classes: wrapper-based methods, filter-based methods, and embedded methods [1], [29], [30].…”
Section: Data Pre-processing and Features Selection (mentioning)
confidence: 99%
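As an illustration of the three FS classes named above, the following scikit-learn sketch shows one representative of each; the particular estimators, scoring functions, and thresholds are assumptions for demonstration, not the methods evaluated in the cited works.

```python
# Hedged sketch: one representative per feature selection class.
# Filter-based:  scores features independently of any classifier.
# Wrapper-based: searches feature subsets using a classifier's performance.
# Embedded:      selection happens inside model training (e.g., L1 penalty).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, RFE, SelectFromModel, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)

# Filter-based: keep the 10 features with the highest ANOVA F-scores.
X_filter = SelectKBest(score_func=f_classif, k=10).fit_transform(X, y)

# Wrapper-based: recursive feature elimination driven by a logistic model.
X_wrapper = RFE(LogisticRegression(max_iter=1000),
                n_features_to_select=10).fit_transform(X, y)

# Embedded: an L1-regularised linear SVM zeroes out weak features while training.
X_embedded = SelectFromModel(
    LinearSVC(C=0.05, penalty="l1", dual=False, max_iter=5000)
).fit_transform(X, y)

print(X_filter.shape, X_wrapper.shape, X_embedded.shape)
```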
“…Software defect prediction models are mainly constructed from features, so the selection of features directly affects the defect prediction results. Feature selection [37][38][39][40] affects the performance of defect prediction by influencing the accuracy and generalization of the machine learning models. When the dataset has only a small number of features, or many redundant ones, the model cannot learn the general pattern, which can lead to underfitting or overfitting.…”
Section: Introduction (mentioning)
confidence: 99%
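A minimal way to probe this effect, assuming a synthetic defect dataset rather than any benchmark used in the cited studies, is to compare cross-validated scores of the same classifier with and without a filter-based selection step; the estimator, selector, and scorer below are illustrative choices.

```python
# Hedged sketch: does a filter-based selection step change the cross-validated
# performance of a defect predictor? Data and estimator are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for a defect dataset: few informative features,
# many redundant or noisy ones.
X, y = make_classification(n_samples=400, n_features=50, n_informative=6,
                           n_redundant=20, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=1)

# Selection lives inside the pipeline so it is re-fit per CV fold (no leakage).
with_fs = Pipeline([
    ("select", SelectKBest(score_func=mutual_info_classif, k=10)),
    ("clf", clf),
])

for name, model in [("all features", clf), ("top-10 features", with_fs)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print("%-16s AUC = %.3f +/- %.3f" % (name, scores.mean(), scores.std()))
```

Placing the selector inside the pipeline, rather than fitting it once on the full dataset, is the standard way to keep the comparison honest across folds.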