2010 22nd IEEE International Conference on Tools with Artificial Intelligence
DOI: 10.1109/ICTAI.2010.27
Attribute Selection and Imbalanced Data: Problems in Software Defect Prediction

Cited by 118 publications (74 citation statements); references 20 publications.
“…From our experience in this research, the problem of attributes selection was a key aspect to obtain satisfactory results, which was also confirmed by [16], [17], and [18]. With appropriate attributes identified, the better accuracy could be achieved for a smaller sets of attributes with a simple appropriate classifier.…”
Section: Results (supporting)
confidence: 56%
“…OSS just like the other undersampling techniques however, does not explicitly avoid the inclusion of outliers in the final training set. Also, the performance of OSS in most empirical studies, particularly in software defect prediction using the NASA MDP datasets where the presence of noise and repeated instances is evident, is not encouraging as it has been consistently outperformed by other techniques such as RUS and SMOTE [4], [25], and [26].…”
Section: Related Work (mentioning)
confidence: 99%
“…Four sampling methods were considered in their work, namely, random undersampling, random oversampling, synthetic minority over-sampling (SMOTE) and Wilson editing. Khoshgoftaar et al [22] presented a process involving a random undersampling technique for addressing uneven class distribution to select important attributes in software engineering. Gao et al [13] proposed a new technique, called SelectRUSBoost, which is a form of ensemble learning that incorporates random undersampling into feature selection to alleviate class imbalance.…”
Section: Feature Selection and Class Imbalance (mentioning)
confidence: 99%
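The excerpts above repeatedly pair attribute (feature) selection with class-imbalance handling such as random undersampling and SMOTE. As an illustration only, the sketch below combines random undersampling with univariate feature selection before training a simple classifier. It assumes the scikit-learn and imbalanced-learn libraries and uses synthetic data in place of a real defect dataset (such as the NASA MDP sets mentioned above); it is not the exact procedure of any cited paper (e.g. SelectRUSBoost).

```python
# Minimal sketch: random undersampling (RUS) + attribute selection for an
# imbalanced defect-prediction-style dataset. Synthetic data stands in for
# a real defect dataset; library calls are from scikit-learn / imbalanced-learn.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.under_sampling import RandomUnderSampler

# Synthetic, heavily imbalanced data (the "defective" class is rare).
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=0)

# 1) Balance the training data with random undersampling.
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_train, y_train)

# 2) Select a small subset of attributes on the balanced sample.
selector = SelectKBest(score_func=f_classif, k=10).fit(X_bal, y_bal)

# 3) Train a simple classifier on the reduced attribute set and evaluate
#    on the untouched (still imbalanced) test split.
clf = LogisticRegression(max_iter=1000).fit(selector.transform(X_bal), y_bal)
print(classification_report(y_test, clf.predict(selector.transform(X_test))))
```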