2020
DOI: 10.1016/j.cmpb.2019.105073

Application of deep canonically correlated sparse autoencoder for the classification of schizophrenia

Cited by 56 publications (29 citation statements)
References 31 publications
“…Ideally, the model should also be tested on a separate, external dataset, as this provides the strongest evidence of model generalization, but this is often not feasible. Still, several studies report accuracies based only on a single train/test split ( Li G. et al, 2018 , Srinivasagopalan et al, 2019 , Kuang et al, 2014 , Kuang and He, 2014 , Li et al, 2019 ), thereby reporting overly optimistic outcomes and complicating comparison with other studies. Since the best practice for assessing model generalizability is to use an independently collected dataset as the test set, it is good practice to report leave-site-out validation, as each site constitutes an independent dataset.…”
Section: Discussionmentioning
confidence: 99%
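The leave-site-out validation recommended in the statement above can be sketched with scikit-learn's `LeaveOneGroupOut` splitter, which holds out all subjects from one site per fold. The data, site labels, and classifier below are synthetic placeholders, not the setup of the cited studies:

```python
# Minimal sketch of leave-site-out cross-validation.
# Assumes synthetic data; sites act as independent held-out test sets.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))      # 120 subjects, 10 imaging/genetic features
y = rng.integers(0, 2, size=120)    # binary diagnosis labels
site = np.repeat([0, 1, 2], 40)     # three acquisition sites, 40 subjects each

# Each fold trains on two sites and tests on the entire held-out third site,
# so every reported accuracy comes from an independently collected dataset.
logo = LeaveOneGroupOut()
scores = cross_val_score(LogisticRegression(), X, y, cv=logo, groups=site)
print(len(scores))  # one accuracy score per held-out site
```

Because the labels here are random, the per-site accuracies hover near chance; the point is the splitting scheme, not the numbers.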
“…Other ML methods [e.g., the deep autoencoder (DeepAE), a likelihood-free inference framework] use a framework in which the information in the input variables is compressed and the input data are subsequently reconstructed, minimizing a reconstruction loss. In this sense, the deep learning (DL) approach is a class of neural networks that has been an active area of ML research, emerging as a powerful tool in genetics and genomics studies, e.g., schizophrenia classification from SNP and functional magnetic resonance imaging datasets (Li et al, 2020), gene expression prediction from SNP genotypes (Xie et al, 2017), a MADS-box gene classification system for angiosperms (Chen et al, 2019), RNA secondary structure prediction (Zhang et al, 2019), and prediction of quantitative phenotypes from SNPs (Liu et al, 2019). Unlike traditional artificial neural networks, DL algorithms use many hidden layers during network training (Xie et al, 2016).…”
Section: Introductionmentioning
confidence: 99%
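The compress-then-reconstruct idea described above can be illustrated with a single-hidden-layer autoencoder in plain NumPy. This is a toy sketch of the general autoencoder principle, not the deep canonically correlated sparse autoencoder of the cited paper; the data, bottleneck size, and learning rate are arbitrary assumptions:

```python
# Toy autoencoder: encode 8-dimensional inputs into a 3-unit bottleneck,
# decode back, and train by gradient descent on the reconstruction MSE.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                     # 200 samples, 8 features

n_hidden = 3                                      # bottleneck dimension
W1 = rng.normal(scale=0.1, size=(8, n_hidden))    # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, 8))    # decoder weights

lr, losses = 0.01, []
for _ in range(500):
    H = np.tanh(X @ W1)                           # encode (compress)
    X_hat = H @ W2                                # decode (reconstruct)
    err = X_hat - X
    losses.append(np.mean(err ** 2))              # reconstruction loss
    # Backpropagate the reconstruction error through both layers.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1 - H ** 2)                # tanh derivative
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

print(losses[-1] < losses[0])  # loss decreases as reconstruction improves
```

A deep variant stacks several such encode/decode layers; sparse variants add a penalty on the hidden activations, which is the "sparse" in the paper's title.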