2016
DOI: 10.4238/gmr.15028798
Applying the Fisher score to identify Alzheimer's disease-related genes

Abstract: Biologists and scientists can use data from Alzheimer's disease (AD) gene expression microarrays to mine AD-related genes. Because of disadvantages such as small sample sizes, high dimensionality, and a high level of noise, it is difficult to obtain accurate and meaningful biological information from gene expression profiles. In this paper, we present a novel approach that uses AD microarray data to identify morbigenous genes. The Fisher score, a classical feature selection method…
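The abstract names the Fisher score as a classical filter criterion for ranking genes. As a rough, generic sketch of how such a score is typically computed on an expression matrix (this is not the authors' code; the function name, the two-class AD/control setup, and the toy data are assumptions):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-gene Fisher score for a samples-by-genes matrix X and class labels y.

    Score = between-class scatter / within-class scatter:
        sum_k n_k * (mean_jk - mean_j)^2  /  sum_k n_k * var_jk
    Higher scores indicate genes whose expression separates the classes better.
    """
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += Xc.shape[0] * (Xc.mean(axis=0) - overall_mean) ** 2
        den += Xc.shape[0] * Xc.var(axis=0)
    return num / (den + 1e-12)  # epsilon guards against zero within-class variance

# Toy usage: 20 samples x 100 genes, random AD/control labels
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))
y = rng.integers(0, 2, size=20)
top10 = np.argsort(fisher_scores(X, y))[::-1][:10]  # indices of the 10 highest-scoring genes
```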

Cited by 30 publications (9 citation statements) · References 17 publications
“…Mutual information–based filter methods use mutual information to evaluate the relevance of features to class labels and the redundancy of candidate features. However, they suffer from the problem that the objective function only uses a single statistic measure of a dataset (eg, standard deviation, information gain [42,47], or Fisher score [48]), while ignoring the fusion of multiple measures. For example, a standard deviation–based filter model relies on the distance between feature value and mean value for feature selection.…”
Section: Introduction (mentioning)
confidence: 99%
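The statement above contrasts filter criteria built on a single statistic (standard deviation, information gain, Fisher score) with mutual information-based relevance. A minimal sketch of a relevance-only mutual information filter, assuming scikit-learn is available and an arbitrary top-k cutoff (the function name and data are illustrative):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def mi_filter(X, y, k=50):
    """Rank features by mutual information with the class labels and keep the
    k most relevant ones. This scores relevance only; it does not penalize
    redundancy among the selected features."""
    mi = mutual_info_classif(X, y, random_state=0)
    top_k = np.argsort(mi)[::-1][:k]
    return X[:, top_k], top_k

# Toy usage
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
X_reduced, selected = mi_filter(X, y, k=20)
```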
“…To evaluate the performance of FSACE in terms of classification accuracy, FSACE algorithm is compared with several state-of-the-art feature selection algorithms, including EGGS ( Chen et al, 2017 ), EGGS-FS ( Yang et al, 2016 ), MEAR ( Xu et al, 2009 ), Fisher ( Saqlain et al, 2019 ), and Lasso ( Tibshirani, 1996 ). According to the change trend of Fisher scores of six gene datasets, we select the top-200 genes as the reduction set for Fisher algorithm.…”
Section: Experimental Results and Analysis (mentioning)
confidence: 99%
“…This part of the experiment compares the BONJE algorithm with four other advanced feature selection algorithms in the low-dimensional data set from the perspective of the number of selected features and the classification accuracy of KNN and SVM classifiers. The four advanced feature selection algorithms are: (1) Classic Rough Set Algorithm (RS) [1], (2) Neighborhood Rough Set Algorithm (NRS) [40], (3) Covering Decision Algorithm (CDA) [41], (4) Maximum Decision Neighborhood Rough Set Algorithm (MDNRS) [35]. show the experimental results of five different feature selection algorithms.…”
Section: The Performance Of Bonje Algorithm On Low-dimensional Data Sets (mentioning)
confidence: 99%
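The comparison described above judges feature selection algorithms by the classification accuracy of KNN and SVM on the selected features. A generic sketch of that evaluation step, assuming scikit-learn and cross-validated accuracy as the metric (the data here is a random stand-in for a reduced gene set):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate_subset(X_selected, y, folds=5):
    """Mean cross-validated accuracy of KNN and SVM on one feature subset."""
    results = {}
    for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM", SVC(kernel="rbf"))]:
        results[name] = cross_val_score(clf, X_selected, y, cv=folds).mean()
    return results

# Toy usage
rng = np.random.default_rng(2)
X_sel = rng.normal(size=(60, 30))
y = rng.integers(0, 2, size=60)
print(evaluate_subset(X_sel, y))
```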