2011
DOI: 10.7763/ijiet.2011.v1.65
Modified-MCA Based Feature Selection Model for Preprocessing Step of Classification

Abstract: Feature subset selection is a technique for reducing the attribute space of a feature set; that is, it identifies a subset of features by removing irrelevant or redundant ones. A good feature set, containing features highly correlated with the class, improves not only the efficiency of classification algorithms but also classification accuracy. A novel metric that integrates the correlation and reliability information between each feature and each class obtained from multiple correspond…

Cited by 2 publications (3 citation statements)
References 13 publications
“…In the literature, many existing feature selection methods can be classified into two categories: univariate and multivariate [50]. Univariate methods, such as information gain and chi-square measure [50,54], consider the effect of each feature on a class separately without considering the inter-dependence among features.…”
Section: Feature Selection
confidence: 99%
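The univariate methods named in the statement above (information gain, chi-square) score each feature against the class independently. As a minimal sketch of the chi-square variant, not taken from the paper itself, the snippet below ranks categorical features by the chi-square statistic of their contingency table with the class; the toy data and all names are illustrative assumptions.

```python
from collections import Counter

def chi2_score(feature, labels):
    """Chi-square statistic between one categorical feature and the class.
    Higher scores indicate a stronger feature/class association."""
    n = len(labels)
    obs = Counter(zip(feature, labels))   # observed joint counts
    f_marg = Counter(feature)             # feature-value marginals
    c_marg = Counter(labels)              # class marginals
    score = 0.0
    for fv in f_marg:
        for cv in c_marg:
            expected = f_marg[fv] * c_marg[cv] / n
            observed = obs.get((fv, cv), 0)
            score += (observed - expected) ** 2 / expected
    return score

# Toy data (illustrative): column 0 tracks the class exactly; 1 and 2 are noise.
X = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
]
y = [1, 1, 0, 0, 1, 0]

scores = [chi2_score([row[j] for row in X], y) for j in range(3)]
ranked = sorted(range(3), key=lambda j: scores[j], reverse=True)
print(ranked)  # column 0 ranks first: it is perfectly correlated with y
```

Note that, exactly as the quoted statement observes, each column is scored in isolation, so inter-feature redundancy (two identical high-scoring columns) would go undetected.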
“…To address this issue, research efforts have been directed towards various essential aspects like feature selection [48,49,50,21], training data selection [51,15], and classifier selection/fusion [52,53]. Among them, feature selection is considered especially applicable in big data analysis because it eliminates features with little predictive information, which also reduces the dimensionality of data and allows the learning algorithms to operate faster and more effectively [50]. In addition, research shows that a well designed feature selection method can not only handle high-dimensional data sets, but also successfully enhance classification performance in coping with imbalanced data [49,21].…”
Section: Feature Selection
confidence: 99%
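The statement above describes feature selection as eliminating features with little predictive information to reduce dimensionality. A hedged sketch of that idea, again not the paper's modified-MCA metric but plain information-gain ranking, scores each column by how much knowing it reduces class entropy and keeps only the top k; all data and names here are illustrative.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature, labels):
    """Reduction in class entropy after conditioning on one feature."""
    n = len(labels)
    remainder = 0.0
    for fv, cnt in Counter(feature).items():
        subset = [y for x, y in zip(feature, labels) if x == fv]
        remainder += cnt / n * entropy(subset)
    return entropy(labels) - remainder

def select_top_k(X, y, k):
    """Keep only the k feature columns with the highest information gain."""
    d = len(X[0])
    gains = [info_gain([row[j] for row in X], y) for j in range(d)]
    keep = sorted(range(d), key=lambda j: gains[j], reverse=True)[:k]
    return [[row[j] for j in keep] for row in X], keep

# Toy data (illustrative): column 0 equals the class, column 1 is noise.
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 1, 0, 0]

Xr, keep = select_top_k(X, y, 1)
print(keep)  # the informative column survives; dimensionality drops 2 -> 1
```

Discarding the zero-gain column shrinks the input that a downstream classifier must process, which is the speed and effectiveness benefit the quoted passage attributes to feature selection.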