2016 2nd International Conference of Signal Processing and Intelligent Systems (ICSPIS)
DOI: 10.1109/icspis.2016.7869891
Combined mRMR filter and sparse Bayesian classifier for analysis of gene expression data

Cited by 3 publications (5 citation statements)
References 12 publications
“…The parallelized mRMR method helps shorten the computation time compared with the conventional single method. Therefore, the maximum relevance and minimum redundancy (mRMR) feature selection algorithm and the parallelized mRMR method increase the accuracy when a smaller subset of genes is selected for cancer classification in this research [28], [29].…”
Section: Discussion
confidence: 99%
“…Wrapper methods search for features according to a specific learning algorithm. Finally, embedded methods [11] select features during the learning process [2].…”
Section: Related Work (mRMR Feature Selection)
confidence: 99%
“…In feature selection algorithms based on information, the main goal is to find a feature set S that has the greatest dependence on the target class z [2]. Using the maximal-dependency criterion, the mutual information relationship is defined as follows:…”
Section: Max-Relevance and Min-Redundancy
confidence: 99%
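The excerpt above truncates before the equation it introduces. In Peng et al.'s standard mRMR formulation (which the excerpt appears to follow; the symbols $S$ and $z$ match the quoted text, the rest is the conventional notation), the maximal-dependency criterion and its mRMR surrogate are written as:

$$\max D(S, z), \quad D = I(\{x_i, i = 1, \ldots, |S|\};\, z)$$

Because joint mutual information over many features is hard to estimate, mRMR replaces it with the difference of average relevance and average redundancy:

$$D = \frac{1}{|S|} \sum_{x_i \in S} I(x_i; z), \qquad R = \frac{1}{|S|^2} \sum_{x_i, x_j \in S} I(x_i; x_j), \qquad \max_S \, \Phi = D - R$$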
“…Multivariate methods pose greater complexity than univariate ones in that, besides requiring a method to evaluate groups of features, they also involve a search mechanism in the space of all possible feature subsets. Regarding the first issue, many works employ correlation-oriented criteria based on the concept of mutual information (MI) (see, e.g., [12], [13], [8], [14], [15], [16]). Indeed, the MI between the features and the output reveals their discriminating capabilities, whereas the correlation among features indicates possible redundancy issues (see, e.g., the Minimal Redundancy Maximal Relevance (MRMR) algorithm [14], and the Correlation Based Filtering (CFS) method [17]).…”
Section: Introduction
confidence: 99%
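The MI-based filtering that the excerpts describe can be sketched as a greedy mRMR selector over discrete features. This is a minimal illustration, not the paper's implementation: the function names are invented here, the MI estimator is a plug-in estimate over discrete values, and the relevance-minus-redundancy (MID) score is one common mRMR variant.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(x; y) in nats for two discrete vectors."""
    n = len(x)
    joint = {}
    for xi, yi in zip(x, y):
        joint[(xi, yi)] = joint.get((xi, yi), 0) + 1
    px, py = {}, {}
    for (xi, yi), c in joint.items():
        px[xi] = px.get(xi, 0) + c
        py[yi] = py.get(yi, 0) + c
    mi = 0.0
    for (xi, yi), c in joint.items():
        p_xy = c / n
        mi += p_xy * np.log(p_xy / ((px[xi] / n) * (py[yi] / n)))
    return mi

def mrmr(X, y, k):
    """Greedy mRMR: pick k columns of X scoring relevance minus
    mean redundancy with the already-selected features."""
    n_features = X.shape[1]
    relevance = [mutual_information(X[:, j], y) for j in range(n_features)]
    selected = [int(np.argmax(relevance))]  # seed with the most relevant feature
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean(
                [mutual_information(X[:, j], X[:, s]) for s in selected]
            )
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

On synthetic data where one column duplicates an informative feature, the redundancy term steers the second pick away from the duplicate, which is exactly the behavior the minimum-redundancy term is meant to produce.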