2019
DOI: 10.1016/j.biosystems.2018.12.009
Gene expression cancer classification using modified K-Nearest Neighbors technique

Cited by 134 publications (75 citation statements); references 21 publications.
“…Finally, observed and predicted responses were compared to understand the prediction accuracy of the selected feature genes. For validation, 3 different prediction algorithms (Support Vector Machine: SVM [28], Random Forest: RF, and k-Nearest Neighbor: k-NN [21]) were used to increase the confidence and robustness of prediction.…”
Section: Methods
confidence: 99%
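The cross-checking of several classifiers described in the quote above can be sketched as a simple majority vote over their per-sample predictions. This is a minimal illustration of the idea of combining SVM, Random Forest, and k-NN outputs for robustness, not the cited authors' implementation:

```python
from collections import Counter

def consensus_prediction(predictions):
    """Majority vote across the predictions that several classifiers
    (e.g. SVM, Random Forest, and k-NN) made for a single sample."""
    return Counter(predictions).most_common(1)[0][0]

# If two of three classifiers agree, their label wins.
label = consensus_prediction(["tumor", "tumor", "normal"])
```

A sample on which all three models agree can be treated with higher confidence than one where they split, which is the robustness argument the quoted passage makes.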
“…These downstream effects (i.e. gene up- or down-regulation) are easy to assess and can be linked directly to pathways or cellular processes, thereby answering how and why a patient could be sensitive to a given therapy [21]. The advantage of such a data-driven approach is that it is free from any preconceived bias such as drug targets, disease genes, etc.…”
Section: Introduction
confidence: 99%
“…There are two objectives for the experiments carried out in this section. The first is to validate the KNNV algorithm by showing its superiority to related algorithms that can handle both heterogeneity and incompleteness in the data, namely Modified KNN (MKNN) [28], KNN for imperfect data (KNNimp) [29], and cost-sensitive KNN (csKNN) [30]. For each algorithm, the precision, recall, accuracy, and F-score [31] metrics were evaluated.…”
Section: Experimental Work
confidence: 99%
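The four evaluation metrics named in the quote above are standard and easy to compute from a confusion matrix. A minimal sketch for binary labels (not the cited authors' evaluation code):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Compute precision, recall, accuracy, and F-score for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = correct / len(y_true)
    # F-score is the harmonic mean of precision and recall.
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f_score": f_score}
```

Precision and recall isolate performance on the positive class, which matters in cancer datasets where class sizes are often imbalanced and raw accuracy can be misleading.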
“…The k-NN classifier assigns an input sample to the majority category of its k nearest neighbors. k-NN is used for its simplicity in many research problems, such as in [30], where it is used to predict cancer from gene expression data. In our case, the dataset was continuous (integer/real), so k-NN was well suited to our research, since that paper reports that k-NN performs better on continuous data than on text data [31].…”
Section: Classifier Used
confidence: 99%
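The nearest-neighbor rule described in the quote above can be sketched in a few lines: rank the training samples by Euclidean distance to the query and take a majority vote among the top k. This is a minimal illustration of plain k-NN, not the modified variant proposed in the cited paper:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training samples,
    using Euclidean distance on continuous feature vectors."""
    dists = sorted(
        ((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y)),
        key=lambda d: d[0],
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Query point near the two "a" samples gets label "a".
label = knn_predict([(0, 0), (0, 1), (5, 5), (6, 5)],
                    ["a", "a", "b", "b"], (0.2, 0.5), k=3)
```

For gene expression data, each feature vector would be a sample's expression profile; in practice the expression values are normalized first so that no single gene dominates the distance.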