2017
DOI: 10.2174/1386207320666170126114051

A Novel Gene Selection Method Based on Sparse Representation and Max-Relevance and Min-Redundancy

Abstract: The effectiveness and stability of our method are demonstrated through various experiments, indicating that the method has practical significance.

Cited by 4 publications (3 citation statements)
References 0 publications
“…Then, the least absolute shrinkage and selection operator (LASSO) algorithm was employed to identify the most predictive features according to their associations with the EGFR mutation and subtypes, with 5-fold cross-validation using the glmnet package in R v3.6 (available from https://www.r-project.org ). The extracted features were further selected using Max-Relevance and Min-Redundancy (mRMR) ( 29 ). Finally, the remaining features were fed into a logistic regression ( 30 ) with the Akaike Information Criterion (AIC) as the stopping rule.…”
Section: Methods
confidence: 99%
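The pipeline quoted above (LASSO screening, mRMR refinement, then logistic regression) can be illustrated with a minimal Python/scikit-learn sketch. The cited study uses the glmnet package in R and an AIC-based stepwise logistic regression; the synthetic data, the greedy correlation-based mRMR step, and the feature counts below are assumptions made purely for illustration, not details from the cited work.

```python
# Minimal sketch of a LASSO -> mRMR -> logistic regression feature-selection
# pipeline. Data, feature counts, and the simple greedy mRMR are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=100, n_informative=10,
                           random_state=0)

# Stage 1: L1-penalised (LASSO-style) logistic regression with 5-fold CV;
# keep features whose coefficients are nonzero.
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5,
                             Cs=10, random_state=0).fit(X, y)
lasso_idx = np.flatnonzero(lasso.coef_[0])

# Stage 2: greedy mRMR on the surviving features -- maximise relevance to y
# (mutual information) while penalising mean correlation with already-chosen features.
def mrmr(X, y, candidates, k=10):
    relevance = mutual_info_classif(X[:, candidates], y, random_state=0)
    chosen, remaining = [], list(range(len(candidates)))
    while remaining and len(chosen) < k:
        scores = []
        for j in remaining:
            if chosen:
                red = np.mean([abs(np.corrcoef(X[:, candidates[j]],
                                               X[:, candidates[c]])[0, 1])
                               for c in chosen])
            else:
                red = 0.0
            scores.append(relevance[j] - red)
        best = remaining[int(np.argmax(scores))]
        chosen.append(best)
        remaining.remove(best)
    return [candidates[c] for c in chosen]

selected = mrmr(X, y, list(lasso_idx), k=10)

# Stage 3: logistic regression on the selected features. An AIC-based stepwise
# stopping rule, as in the cited pipeline, would typically be added via statsmodels.
clf = LogisticRegression(max_iter=1000).fit(X[:, selected], y)
print(selected, clf.score(X[:, selected], y))
```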
“…Typical linear feature extraction algorithms include sparse principal component analysis (PCA) ( Min et al, 2018 ; Islam et al, 2020 ), independent component analysis ( Moysés et al, 2017 ), and LDA. Nonlinear transformation methods primarily include neural networks, kernel methods ( Qi et al, 2021b ), manifold learning ( Shen et al, 2017 ), sparse representation ( Min et al, 2017 ), and matrix factorization methods ( Wang et al, 2017 ; Yang et al, 2017 ; Yang and Hu, 2017 ; McCall et al, 2019 ). With the continuous development of machine learning and data mining, new feature extraction methods continue to arise.…”
Section: Application Of Sparse Representation In Bioinformatics
confidence: 99%
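The excerpt above contrasts linear feature extraction (e.g. sparse PCA, ICA, LDA) with nonlinear approaches (kernel methods, manifold learning, sparse representation, matrix factorization). A minimal scikit-learn sketch comparing ordinary PCA with sparse PCA is shown below; the synthetic expression-like data, component count, and sparsity penalty are assumptions chosen only to illustrate why sparse loadings are easier to interpret.

```python
# Compare dense vs sparse component loadings on synthetic "expression" data.
# All numbers here are illustrative assumptions, not values from the cited papers.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 200))   # 100 samples, 200 "genes"

pca = PCA(n_components=5).fit(X)
spca = SparsePCA(n_components=5, alpha=1.0, random_state=0).fit(X)

# Sparse PCA drives most loadings to exactly zero, so each component involves
# only a small subset of genes, which aids biological interpretation.
print("dense loadings  != 0:", np.count_nonzero(pca.components_))
print("sparse loadings != 0:", np.count_nonzero(spca.components_))
```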
“…Machine learning methods have also entered the field of bioinformatics research. [40][41][42] Support vector machines (SVMs) were used by Jiang et al, 43 Xu et al, 44 Zeng et al 45 and Wang et al, 46 a logistic model tree was used by Wang et al, 47 and a decision tree was used by Zhao et al; 48 these are excellent classification tools with global optimality and better generalization abilities to predict potential disease-related candidate miRNAs, but such methods require known negative sample information related to disease-related miRNAs that is difficult to obtain. In order to solve the problem of negative sample acquisition, Chen et al 49 used a regularized least squares approach to optimize similarity networks of miRNAs and diseases, respectively, and the final miRNA-disease associations were linear weightings of miRNA similarity scores and disease similarity scores.…”
Section: Introduction
confidence: 99%
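The score-combination idea at the end of this excerpt can be sketched in a few lines of Python: association scores obtained from the miRNA-similarity side and the disease-similarity side are merged by a linear weighting. The matrices, the weight, and the per-side scores below are placeholders; the cited work (Chen et al.) derives those scores with regularized least squares, which is not reproduced here.

```python
# Toy illustration of linearly weighting two similarity-based score matrices.
# Placeholder scores and weight; not the cited regularized least squares method.
import numpy as np

rng = np.random.default_rng(0)
n_mirna, n_disease = 50, 20

score_from_mirna_side = rng.random((n_mirna, n_disease))    # placeholder scores
score_from_disease_side = rng.random((n_mirna, n_disease))  # placeholder scores

alpha = 0.5  # assumed weight between the two similarity views
final_scores = alpha * score_from_mirna_side + (1 - alpha) * score_from_disease_side

# Rank candidate miRNAs for one disease by the combined score.
disease_idx = 0
top_mirnas = np.argsort(final_scores[:, disease_idx])[::-1][:5]
print(top_mirnas)
```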