2022
DOI: 10.1007/s12559-022-10033-3
Efficient Deep Neural Networks for Classification of Alzheimer’s Disease and Mild Cognitive Impairment from Scalp EEG Recordings

Cited by 49 publications (22 citation statements)
References 94 publications
“…Early diagnosis and differential diagnosis of MCI have been achieved with two different deep learning architectures: a modified convolutional neural network and a convolutional autoencoder neural network. The outcomes demonstrate a 10% increase in accuracy over comparable techniques, indicating that deep learning is a promising approach for processing EEG signals (Fouladi et al, 2022).…”
Section: Discussion
Confidence: 94%
“…Various standard features, such as spectral analysis, PSD, event-related potentials (ERPs), power in different frequency bands, and alpha-band relationships, were extracted during the tasks by Hu and Zhang [59]. Early detection of Alzheimer's disease (AD) at the mild cognitive impairment (MCI) stage was proposed by Fouladi et al [60]. The authors proposed two different deep learning architectures for classifying subjects into AD, MCI, and healthy control groups using 19-channel scalp EEG data.…”
Section: Discussion
Confidence: 99%
“…For accurate classification, the output of the encoder layer was fed through two successive convolutional layers with 64 filters and 2×2 kernels, followed by a 2×2 max-pooling layer with stride 2. Batch normalization and 0.1 dropout layers were also used to prevent overfitting [33].…”
Section: Convolutional Auto-encoder Neural Network
Confidence: 99%
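The classification head quoted above (two 64-filter convolutional layers with 2×2 kernels on the encoder output, a 2×2 max-pool with stride 2, batch normalization, and 0.1 dropout) can be sketched in PyTorch. This is a minimal illustration of that layer stack, not the authors' implementation: the input channel count, layer ordering, activations, and the final pooling/linear classifier are assumptions.

```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """Hypothetical sketch of the classification head described in [33]:
    two 64-filter conv layers (2x2 kernels) over the encoder output,
    batch normalization, 2x2 max-pooling with stride 2, and 0.1 dropout.
    Input channels and the final global-pool + linear layer are assumed."""

    def __init__(self, in_channels: int = 32, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=2, padding=1),
            nn.BatchNorm2d(64),            # batch normalization against overfitting
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=2),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),  # 2x2 max-pool, stride 2
            nn.Dropout(0.1),               # 0.1 dropout, as in the excerpt
        )
        self.classify = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, n_classes),      # AD / MCI / healthy control
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classify(self.features(x))

head = ClassifierHead()
out = head(torch.randn(4, 32, 8, 8))  # batch of 4 encoder feature maps
print(out.shape)  # torch.Size([4, 3]) -> one logit per class
```

The three output logits correspond to the AD / MCI / healthy-control classes mentioned in the citation statements; the 8×8 encoder feature-map size is purely illustrative.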