2021
DOI: 10.1016/j.clinph.2020.09.015

Stacked autoencoders as new models for an accurate Alzheimer’s disease classification support using resting-state EEG and MRI measurements

Abstract: Artificial neural networks with stacked autoencoders detected Alzheimer's dementia patients based on EEG and structural MRI variables. Classification accuracies over control participants reached 80% (EEG), 85% (MRI), and 89% (both). These results motivate future multi-centric, harmonized prospective and longitudinal cross-validation studies. Objective: This retrospective and exploratory study tested the accuracy of artificial neural networks (ANNs) at detecting Alzheimer's disease patients with d…
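The abstract names only the model family (stacked autoencoders over EEG/MRI-derived variables). As a rough, non-authoritative illustration of what such a classifier could look like, here is a minimal PyTorch sketch; the feature count, layer sizes, and two-class head are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal sketch of a stacked-autoencoder-style classifier (PyTorch).
# All dimensions below are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

class StackedAEClassifier(nn.Module):
    def __init__(self, n_features=100, hidden=(64, 32), n_classes=2):
        super().__init__()
        # Encoder stack: in a stacked-autoencoder workflow each layer is
        # typically pre-trained to reconstruct its input, then the stack is
        # fine-tuned together with the classifier head.
        self.encoder = nn.Sequential(
            nn.Linear(n_features, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden[1], n_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))

# Example: a batch of 8 subjects, each described by 100 EEG/MRI features.
model = StackedAEClassifier()
logits = model(torch.randn(8, 100))
print(logits.shape)  # torch.Size([8, 2])
```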

Cited by 45 publications (16 citation statements)
References: 78 publications
“…Numerous ML methods have been used to classify and predict AD stages with promising results (Haller, Lövblad et al. 2011, Falahati, Westman et al. 2014, Rathore, Habes et al. 2017). While some studies have made use of a single screening modality, such as MRI (Fan, Batmanghelich et al. 2008, Kloppel, Stonnington et al. 2008, Cuingnet, Gerardin et al. 2011, Liu, Zhang et al. 2012, Tong, Wolz et al. 2014) or electroencephalography (EEG) (Blinowska, Rakowski et al. 2017, Farina, Emek-Savaş et al. 2020, Ferri, Babiloni et al. 2020, Oltu, Akşahin et al. 2021), others have used a combination of multiple imaging techniques including MRI, PET, and cerebrospinal fluid (CSF) biomarkers (Zhang, Wang et al. 2011, Gray, Aljabar et al. 2013, Jie, Zhang et al. 2013, Young, Modat et al. 2013, Teipel, Kurth et al. 2015, Yun, Kwak et al. 2015, Samper-González, Burgos et al. 2018). Although many of those studies presented interesting and promising results in AD classification, most focused on a so-called two-class problem.…”
Section: Introduction (mentioning)
confidence: 99%
“…The proposed approach for 4-way classification achieved accuracies of 98.88, 98.01, and 98.14% using the GoogleNet, ResNet-18, and ResNet-152 pre-trained networks, respectively. All three architectures performed better than other models proposed to classify AD MRI data, such as the stacked autoencoder (SAE) models (Gupta et al., 2013; Ferri et al., 2021). Furthermore, Valliani and Soni (2017) proposed a pre-trained deep ResNet to classify AD MRI imaging in order to demonstrate that training on biomedical imaging was not necessary for the task, and achieved modest accuracy.…”
Section: Methodologies for AD Imaging Classification (CNN) (mentioning)
confidence: 98%
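The quoted passage describes fine-tuning ImageNet-pretrained CNNs (e.g., ResNet-18) for 4-way AD classification. The sketch below shows that transfer-learning idea with torchvision as a hedged illustration only; the weights identifier, input size, and layer-freezing choice are assumptions, not the cited authors' pipeline.

```python
# Hedged transfer-learning sketch: reuse an ImageNet-pretrained ResNet-18
# and replace its final layer with a 4-class head (assumed setup).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 4)     # new 4-way classification head

# Optionally freeze the backbone and train only the new head.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc")

x = torch.randn(2, 3, 224, 224)  # two dummy RGB-sized image slices
print(model(x).shape)            # torch.Size([2, 4])
```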
“…Therefore, the need to process large images renders MLPs a non-optimal option. MLPs also require flattened vector inputs for image processing, so spatial information becomes lost (Feng et al., 2019). Finally, MLPs run the risk of overfitting the training data, leading to poor generalizability (Caruana et al., 2001).…”
Section: Background: Multilayer Perceptron Neural Network (mentioning)
confidence: 99%
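To make the flattening point in the quoted passage concrete, a toy example: flattening a 2-D image into the 1-D vector an MLP consumes pushes vertically adjacent pixels far apart, so neighbourhood structure is no longer explicit. The shapes used here are arbitrary.

```python
# Toy illustration: an MLP input layer only sees a flattened vector,
# so 2-D neighbourhood information is not preserved in the input layout.
import torch

image = torch.arange(16.0).reshape(4, 4)  # a tiny 4x4 "image"
flat = image.flatten()                    # what an MLP input layer receives

# Pixels (0,0) and (1,0) are vertical neighbours in the image...
print(image[0, 0].item(), image[1, 0].item())  # 0.0 4.0
# ...but sit 4 positions apart in the flattened vector the MLP sees.
print(flat[0].item(), flat[4].item())          # 0.0 4.0
```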
“…The autoencoder’s main aim is to reconstruct the inputs such that the difference between the input and the output is minimized. The learning in an autoencoder is compressed and distributed (encoding) (Ferri et al., 2021). The training of an autoencoder involves three steps:
Section: Methods (mentioning)
confidence: 99%
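The quoted description of autoencoder training (reconstruct the input, minimize the input-output difference) can be sketched as below. The excerpt's "three steps" are truncated, so the loop simply shows a generic reconstruction objective; layer sizes, optimizer, and iteration count are illustrative assumptions.

```python
# Minimal sketch of the reconstruction objective described above:
# train the autoencoder so its output matches its input (MSE loss).
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(100, 32), nn.ReLU(),   # encoder: compressed representation
    nn.Linear(32, 100),              # decoder: reconstruct the input
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 100)             # dummy batch of feature vectors
for step in range(200):
    recon = autoencoder(x)           # encode and decode the batch
    loss = loss_fn(recon, x)         # measure input-output difference
    optimizer.zero_grad()
    loss.backward()                  # update weights to reduce that difference
    optimizer.step()
```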