2005
DOI: 10.1109/tpami.2005.159

Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy

Abstract: Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and othe…
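As a rough illustration of the first-order incremental scheme the abstract describes, here is a minimal sketch (not the authors' implementation): at each step, the candidate feature maximizing relevance I(f; y) minus its mean mutual information with the already-selected features is added. The function names are illustrative, and the discrete MI estimator is a plain plug-in estimate over observed value pairs.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of mutual information (in nats) between two
    discrete-valued arrays, summing p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                py = np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def mrmr(X, y, k):
    """Greedy first-order incremental mRMR: start from the most relevant
    feature, then repeatedly add the feature maximizing
    relevance I(f; y) minus mean redundancy with the selected set."""
    n_features = X.shape[1]
    relevance = [mutual_information(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([mutual_information(X[:, j], X[:, s])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

On a toy dataset where feature 1 duplicates feature 0, the redundancy penalty steers the second pick toward an independent but informative feature instead of the duplicate, which is the behavior the criterion is designed to produce.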

Cited by 8,320 publications (1,892 citation statements)
References 19 publications

“…It can be used for analyzing behavioral data, where many of the properties we have highlighted could be useful (e.g., CMI, interactions). It has been used for feature selection in general classification problems [Lefakis and Fleuret, 2014; Peng et al, 2005; Torkkola, 2003] and we hope GCMI would provide practical advantages in many such applications. We further suggest that the copula normalization could be used as a general preprocessing step that would convert any covariance‐based statistic or algorithm into a robust rank‐based version (e.g., common spatial patterns, canonical correlation analysis, linear/quadratic discriminant analysis).…”
Section: Discussion
confidence: 99%
“…The proposed ensemble of TQ²I summary statistics, specifically, CVPAI2(R: A ⇒ B ⊆ A × B), OA(OAMTRX = FrequencyCount(A × B)) and class-conditional probabilities(OAMTRX), is an original minimally dependent and maximally informative (mDMI) set (Si Liu, Hairong Liu, Latecki, Xu, & Lu, 2011; Peng, Long, & Ding, 2005) of outcome Q²Is (O-Q²Is), to be jointly maximized according to the Pareto formal analysis of multi-objective optimization problems (Boschetti, Flasse, & Brivio, 2004); refer to the Part 1, Chapter 1.…”
Section: Methods
confidence: 99%
“…According to the GEO-CEOS Val guidelines (GEO-CEOS, 2010; GEO-CEOS WGCV, 2015), Val is the process of assessing, by independent means, the quality of an information processing system by means of an mDMI set (Si Liu et al., 2011; Peng et al., 2005) of community-agreed outcome and process (OP) Q²Is (OP-Q²Is), each one provided with a degree of uncertainty in measurement, ±δ, with δ ≥ 0%.…”
Section: Validation Session
confidence: 99%
“…Gene selection mainly has two merits (Peng et al., 2005; Saeys et al., 2007). First, it can dramatically reduce the number of genes used in classifying the disease and overcome the problem of the “curse of dimensionality”.…”
Section: Introduction
confidence: 99%