2019
DOI: 10.3389/fgene.2019.00033
k-Skip-n-Gram-RF: A Random Forest Based Method for Alzheimer's Disease Protein Identification

Abstract: In this paper, a computational method based on machine learning for identifying Alzheimer's disease genes is proposed. Most existing machine learning based methods predict Alzheimer's disease genes using structural magnetic resonance imaging (MRI). Although these methods have attained acceptable results, MRI is expensive and time consuming. We therefore propose a computational method for identifying Alzheimer's disease genes by use of the sequence information of…

Cited by 65 publications (37 citation statements)
References 67 publications
“…For the ROC curve, 1-specificity was plotted on the horizontal axis, and sensitivity on the vertical axis. LOO, K-fold cross-validation, and independent testing are the most widely used methods for predictor evaluation (Mrozek et al., 2015; Cao and Cheng, 2016; Chen et al., 2017, 2018a, 2019b; Pan et al., 2017; He et al., 2018, 2019; Jiang et al., 2018; Xiong et al., 2018; Yu et al., 2018; Zhang et al., 2018; Ding et al., 2019; Feng et al., 2019; Kong and Zhang, 2019; Li and Liu, 2019; Lv et al., 2019a; Manavalan et al., 2019; Shan et al., 2019; Wang et al., 2019a; Wei et al., 2019a,b; Xu et al., 2019; Yu and Dai, 2019). That is, the general machine learning evaluation steps (training, validation, and testing) are used for optimized model evaluation.…”
Section: Model Evaluation Metrics and Methods
confidence: 99%
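The ROC construction described above (1-specificity on the horizontal axis, sensitivity on the vertical axis) can be sketched with scikit-learn; the labels and scores below are illustrative, not data from the paper.

```python
# Hedged sketch of an ROC curve: fpr is 1-specificity (x axis),
# tpr is sensitivity (y axis). Labels and scores are made up.
from sklearn.metrics import auc, roc_curve

y_true = [0, 0, 0, 1, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9, 0.2, 0.7]

# roc_curve sweeps a decision threshold over the scores and returns
# the false-positive rate (1-specificity) and true-positive rate
# (sensitivity) at each threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_score)
roc_auc = auc(fpr, tpr)
```

Since every positive score here exceeds every negative score, this toy example yields an AUC of 1.0; real predictors trace a curve between the diagonal (random) and the top-left corner (perfect).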
“…By combining multiple weak classifiers and voting or averaging their outputs, the final model achieves higher accuracy, better generalization, and greater resistance to overfitting. This algorithm has been extensively used in bioinformatics and other areas, and has been confirmed to be an effective modeling technique in various domains (Ding et al., 2016a,b; Mrozek et al., 2016; Qiu et al., 2016; Wang et al., 2017; Wei et al., 2017a,b,c; Yu et al., 2017a; Zheng et al., 2017; Tang et al., 2018, 2019a; Xue et al., 2018; Degenhardt et al., 2019; Xu et al., 2019). In this study, the scikit-learn toolkit, available at https://scikit-learn.org, was used to establish the models.…”
Section: Algorithm
confidence: 99%
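The excerpt states that scikit-learn was used to build the random forest models; a minimal sketch follows. The synthetic features and the hyperparameters are illustrative assumptions, not the authors' exact setup or data.

```python
# Hedged sketch: a random forest aggregates many decision trees (weak
# classifiers) and majority-votes their predictions. The synthetic
# data stands in for sequence-derived protein features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators is the number of trees voted over; 100 is an assumption.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)  # mean accuracy on the held-out split
```

Averaging many decorrelated trees is what gives the ensemble its resistance to overfitting relative to a single deep decision tree.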
“…SE = TP / (TP + FN), SP = TN / (TN + FP), and ACC = (TP + TN) / (TP + TN + FP + FN), where TP, TN, FN, and FP refer to the numbers of correctly predicted thermophilic proteins, correctly predicted non-thermophilic proteins, thermophilic proteins incorrectly predicted as non-thermophilic, and non-thermophilic proteins incorrectly predicted as thermophilic, respectively. The SE and SP indicators measure the predictive ability of a model on the positive and negative classes, respectively, and ACC is used to evaluate the overall performance of a prediction model (Wang et al., 2008; Zou et al., 2017a,b; Wang G. et al., 2018; Xue et al., 2018; Xu et al., 2018a, 2019; Ding et al., 2019b; Yang, 2019; Zeng et al., 2019; Fu et al., 2020; Hong et al., 2020).…”
Section: Performance Measurement
confidence: 99%
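The SE, SP, and ACC metrics described above can be computed directly from the four confusion counts; the counts in the usage line are illustrative, not results from the paper.

```python
# Hedged sketch of the standard confusion-matrix metrics; the helper
# name and the example counts are illustrative assumptions.
def se_sp_acc(tp, tn, fp, fn):
    """Sensitivity, specificity, and accuracy from confusion counts."""
    se = tp / (tp + fn)                    # ability on the positive class
    sp = tn / (tn + fp)                    # ability on the negative class
    acc = (tp + tn) / (tp + tn + fp + fn)  # overall performance
    return se, sp, acc

se, sp, acc = se_sp_acc(tp=40, tn=50, fp=10, fn=10)
# se = 0.8, sp = 50/60, acc = 90/110
```

Note that ACC can look deceptively high on imbalanced datasets, which is why SE and SP are reported alongside it.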
“…K-fold cross-validation, leave-one-out cross-validation (LOOCV), and independent tests are the three major validation methods. In this paper, we use five-fold cross-validation to evaluate and compare the different identifiers (Jiang et al., 2013; Ding et al., 2017; Wei et al., 2017a,b,c, 2019; Chu et al., 2019; Liu et al., 2019c,d; Shan et al., 2019; Xu et al., 2019c; Zeng et al., 2019a,c; Zhang X. et al., 2019). Five-fold cross-validation first divides the whole training dataset into five parts.…”
Section: Evaluation Measurement
confidence: 99%
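The five-fold procedure described above can be sketched with scikit-learn's `cross_val_score`; the classifier and synthetic data are illustrative assumptions, not the paper's actual identifiers or dataset.

```python
# Hedged sketch of five-fold cross-validation: cv=5 splits the data
# into five parts, and each part serves once as the held-out fold
# while the other four are used for training.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=150, n_features=10, random_state=1)
clf = RandomForestClassifier(n_estimators=50, random_state=1)

scores = cross_val_score(clf, X, y, cv=5)  # one accuracy per fold
mean_acc = scores.mean()                   # averaged over the 5 folds
```

Averaging across the five folds gives a less variance-prone estimate of generalization than a single train/test split, at five times the training cost.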