2003
DOI: 10.1007/978-3-540-24586-5_60

Two New Metrics for Feature Selection in Pattern Recognition

Abstract: The purpose of this paper is to discuss feature selection methods. We present two common feature selection approaches: statistical methods and artificial intelligence approaches. Statistical methods are presented as antecedents of classification methods, with specific techniques for the choice of variables, because we intend to apply the feature selection techniques to classification problems. We show the artificial intelligence approaches from different points of view. We also present the use of the info…

Cited by 7 publications (4 citation statements)
References 12 publications

“…To further validate the developed algorithms, we compared the classification results from this investigation with classic feature selection methods such as SVM-RFE (SVM Recursive Feature Elimination) [34], ARCO (Area Under the Curve (AUC) and Rank Correlation coefficient Optimization) [35], Relief [36] and mRMR (minimal-redundancy-maximal-relevance) [37] on our data. The mRMR method achieved the highest classification accuracy, 83%, when the number of features/genes was 32.…”
Section: Comparative Evaluation and Validation of SVM Results
confidence: 99%
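For orientation, SVM-RFE of the kind cited above can be approximated with scikit-learn's RFE wrapper around a linear SVM, which ranks and prunes features by the magnitude of the SVM weight vector. The sketch below is illustrative only: the data is a random placeholder, and the 32-feature setting simply mirrors the gene count reported in the statement.

```python
# Minimal SVM-RFE sketch using scikit-learn (a sketch, not the cited authors' exact setup).
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 500))   # placeholder: 100 samples x 500 genes
y = rng.integers(0, 2, size=100)  # placeholder binary labels (e.g., TNBC vs non-TNBC)

# A linear SVM exposes a weight vector, which RFE uses to rank and eliminate features.
svm = SVC(kernel="linear")
selector = RFE(estimator=svm, n_features_to_select=32, step=10)
X_reduced = selector.fit_transform(X, y)

# Cross-validated accuracy of a classifier restricted to the 32 selected features.
print(cross_val_score(SVC(kernel="linear"), X_reduced, y, cv=5).mean())
```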
“…For evaluation and comparison of the classification and misclassification performance of the four ML algorithms, we used four different scenarios into which any sample could fall: (a) true positive (TP), meaning the sample was predicted as TNBC and the prediction was correct; (b) true negative (TN), meaning the sample was predicted as non-TNBC and the prediction was correct; (c) false positive (FP), meaning the sample was predicted as TNBC but was non-TNBC; and (d) false negative (FN), meaning the sample was predicted as non-TNBC but was TNBC. Using this information, we evaluated the classification results of the model by calculating the overall accuracy. To further validate the methods, the classification results were also compared with classic feature selection methods such as SVM-RFE [34], ARCO [35], Relief [36] and mRMR [37]. SVM-RFE relies on constructing feature ranking coefficients based on the weight vector generated by the SVM during training.…”
Section: Modeling Prediction and Performance Evaluation
confidence: 99%
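The overall accuracy mentioned in this statement follows directly from the four scenario counts. A minimal sketch (the counts below are invented for illustration, not the study's results):

```python
# Sketch: deriving overall accuracy (and related rates) from the four
# scenario counts (TP, TN, FP, FN) described above.
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all predictions that were correct
        "sensitivity": tp / (tp + fn),   # true-positive rate (TNBC cases found)
        "specificity": tn / (tn + fp),   # true-negative rate
        "precision": tp / (tp + fp),     # predicted-TNBC samples that really were TNBC
    }

# Hypothetical counts chosen so that accuracy = 0.83, echoing the 83% above.
print(classification_metrics(tp=40, tn=43, fp=9, fn=8))
```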
“…In this algorithm we use the terms R(A) and H(A) proposed in [20]. R(A) lies within [0, 1] and stands for the relative importance of attribute A, while H(A) represents heuristic information about a subset of candidate features.…”
Section: A Greedy Algorithm for Feature Selection
confidence: 99%
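The excerpt does not reproduce the expressions for R(A) and H(A), so a faithful implementation cannot be recovered from this page alone. The sketch below only illustrates the shape of a greedy selection loop driven by such scores; `relevance` and `heuristic` are hypothetical stand-ins for R(A) and H(A):

```python
# Sketch of a greedy forward-selection loop driven by per-attribute scores.
# The real R(A) and H(A) from the cited paper are not reproduced in the
# excerpt above, so `relevance` and `heuristic` are hypothetical stand-ins.
from typing import Callable, List, Set

def greedy_select(attributes: List[str],
                  relevance: Callable[[str], float],       # stand-in for R(A), in [0, 1]
                  heuristic: Callable[[Set[str]], float],  # stand-in for H(A) on a candidate subset
                  k: int) -> List[str]:
    selected: List[str] = []
    remaining = set(attributes)
    while remaining and len(selected) < k:
        # Score each candidate by its own relevance plus the heuristic
        # value of the subset that would result from adding it.
        best = max(remaining,
                   key=lambda a: relevance(a) + heuristic(set(selected) | {a}))
        selected.append(best)
        remaining.discard(best)
    return selected
```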
“…In this algorithm we use the terms R(A) and H(A) proposed in [Piñ03]. The expression for R(A), which is a relevance measure of the attributes (0 ≤ R(A) ≤ 1), is:…”
Section: Feature Selection Using an Evolutionary Approach
confidence: 99%