2013
DOI: 10.1016/j.patcog.2013.04.021

Mutual information-based method for selecting informative feature sets

Cited by 60 publications (42 citation statements) · References 14 publications
“…Similarly to some previous research [12,13,9], for each feature set size, we employed a linear support vector machine (with the regularization parameter set to 1) to obtain the 10-fold cross-validation error rate (or leave-one-out validation error if the data set contains less than 100 instances). Additionally, the same statistics [5] are also collected from two other classifiers, namely Naive Bayes (NB) and kNN classifier (k = 3).…”
Section: Experimental Evaluation (mentioning, confidence: 99%)
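A minimal sketch of the evaluation protocol described in the excerpt above, assuming scikit-learn: a linear SVM with the regularization parameter set to 1, Naive Bayes, and kNN (k = 3), each scored by 10-fold cross-validation, or leave-one-out when the data set has fewer than 100 instances. The data set and the selected feature subset below are placeholders for illustration, not taken from the cited paper.

```python
# Illustrative evaluation: cross-validated error rate of three classifiers
# on a chosen feature subset (placeholder data set and subset).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, StratifiedKFold, LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)   # placeholder data set
selected = [0, 3, 7, 21]                     # hypothetical feature subset
X_sel = X[:, selected]

# 10-fold CV, or leave-one-out for small data sets (< 100 instances).
cv = LeaveOneOut() if len(y) < 100 else StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

classifiers = {
    "linear SVM (C=1)": make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=10000)),
    "Naive Bayes": GaussianNB(),
    "kNN (k=3)": KNeighborsClassifier(n_neighbors=3),
}

for name, clf in classifiers.items():
    acc = cross_val_score(clf, X_sel, y, cv=cv, scoring="accuracy")
    print(f"{name}: error rate = {1.0 - acc.mean():.3f}")
```

In practice this loop would be repeated for each feature set size produced by the selection method, so that error curves can be compared across methods.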
“…To determine which normalization form performs better overall, following Herman et al [13], the one-sided paired t-test at 5% significance level was used to compare Form-1 and Form-2 with the baseline Form-0. The experiment results of SVM are shown in Table 2 where we use '+'/'−'/'=' to indicate that Form-0 performs 'better'/'worse'/'equally well' compared to the two other forms.…”
Section: Normalization (mentioning, confidence: 99%)
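A hedged sketch of that comparison procedure using scipy's paired t-test. The error arrays below are invented for illustration, and the sketch assumes that "better" means a lower error rate; the actual data and decision rule of the citing paper may differ.

```python
# One-sided paired t-test at the 5% significance level, comparing a baseline
# normalization (Form-0) against an alternative (Form-1) over paired
# per-data-set error rates. Error values are made up for illustration.
import numpy as np
from scipy.stats import ttest_rel

form0_err = np.array([0.12, 0.08, 0.21, 0.15, 0.30, 0.05])  # baseline (Form-0)
form1_err = np.array([0.14, 0.09, 0.20, 0.18, 0.33, 0.05])  # alternative (Form-1)

# H1: Form-0 has lower error than Form-1, i.e. Form-0 is "better".
stat, p_better = ttest_rel(form0_err, form1_err, alternative="less")

if p_better < 0.05:
    verdict = "+"  # Form-0 significantly better
else:
    # Test the opposite direction to distinguish "worse" from "equally well".
    _, p_worse = ttest_rel(form0_err, form1_err, alternative="greater")
    verdict = "-" if p_worse < 0.05 else "="

print(f"p = {p_better:.3f}, verdict for Form-0: {verdict}")
```

This reproduces the '+'/'−'/'=' bookkeeping mentioned in the excerpt: a significant one-sided result in either direction marks 'better' or 'worse', and anything else is treated as 'equally well'.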
“…Therefore, it is important to consider interdependencies between features and remove those which are redundant. Herman et al [5] provide an in-depth review of redundant feature selection using MI. In their paper, a general framework is proposed for the several techniques in defining redundancy and combining it with relevancy. One such method is minimal redundancy maximum relevance (mRMR) [1,11], referred to as MI difference in [5]. In mRMR, the relevancy, Rel(F, C), of a feature set, F, is given by the mean MI of the member features and the class labels, C, namely,…”
Section: Redundant Feature Selection (mentioning, confidence: 99%)
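The quoted formula is truncated. A minimal sketch of the mRMR idea as it is usually stated: relevance is the mean mutual information between the selected features and the class, redundancy is the mean pairwise MI among the selected features, and features are added greedily to maximise relevance minus redundancy. The data set, quartile discretisation, and greedy search below are illustrative choices, not the algorithm of the cited papers.

```python
# Illustrative mRMR-style greedy forward selection using MI estimates.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

X, y = load_breast_cancer(return_X_y=True)

# Discretise each feature into quartile bins so pairwise MI between
# features is well defined.
Xd = np.column_stack([
    np.digitize(X[:, j], np.quantile(X[:, j], [0.25, 0.5, 0.75]))
    for j in range(X.shape[1])
])

# Relevance: I(f; C) for every feature.
relevance = mutual_info_classif(Xd, y, discrete_features=True, random_state=0)

def mrmr_select(Xd, relevance, k):
    """Greedily add the feature maximising relevance minus mean redundancy."""
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(Xd.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

print(mrmr_select(Xd, relevance, k=5))
```

The difference "relevance minus redundancy" is what motivates the name "MI difference" used for mRMR in the excerpt above.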