2019
DOI: 10.1371/journal.pone.0225759

Neuroimaging modality fusion in Alzheimer’s classification using convolutional neural networks

Abstract: Automated methods for Alzheimer’s disease (AD) classification have the potential for great clinical benefits and may provide insight for combating the disease. Machine learning, and more specifically deep neural networks, have been shown to have great efficacy in this domain. These algorithms often use neurological imaging data such as MRI and FDG PET, but a comprehensive and balanced comparison of the MRI and amyloid PET modalities has not been performed. In order to accurately determine the relative strength…


Cited by 47 publications (31 citation statements)
References 43 publications
“…As a result, recent trends in AD diagnosis include the use of DL-based approaches. DL-based studies [7,11,12,19–21] consider multimodal information for classifying AD and mAD from NC. The studies [7,22,23] use 3D patches from the whole brain to train and test a CNN model.…”
Section: Introduction
confidence: 99%
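The 3D-patch strategy mentioned above can be sketched as follows; the cubic patch size and non-overlapping stride are illustrative assumptions, not values taken from the cited studies:

```python
import numpy as np

def extract_3d_patches(volume, patch_size=32, stride=32):
    """Tile a 3D brain volume into cubic patches for CNN training.

    volume: 3D numpy array (e.g. a preprocessed structural MRI).
    Returns an array of shape (n_patches, patch_size, patch_size, patch_size).
    """
    patches = []
    x_max, y_max, z_max = volume.shape
    for x in range(0, x_max - patch_size + 1, stride):
        for y in range(0, y_max - patch_size + 1, stride):
            for z in range(0, z_max - patch_size + 1, stride):
                patches.append(volume[x:x + patch_size,
                                      y:y + patch_size,
                                      z:z + patch_size])
    return np.stack(patches)

# A 96^3 toy volume tiles into 3 x 3 x 3 = 27 patches of 32^3 voxels.
vol = np.random.rand(96, 96, 96)
print(extract_3d_patches(vol).shape)  # (27, 32, 32, 32)
```

Each patch would then be fed to a 3D convolutional network; whole-brain tiling like this trades spatial context for a much larger effective training set.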
“…As a result, recent trends in AD diagnosis include the use of DL-based approaches. DL-based [7,11,12,[19][20][21] studies consider multimodal information for classifying AD and mAD from NC. The studies [7,22,23] use 3D patches from the whole brain to train and test a CNN model.…”
Section: Introductionmentioning
confidence: 99%
“…The overwhelming majority of these studies have been focusing on complex and high-dimensional brain imaging data, especially PET and structural MRI (Jo et al., 2019; Ebrahimighahnavieh et al., 2020; Gautam and Sharma, 2020; Haq et al., 2020). Several recent studies have aimed to integrate multimodal imaging to improve classification performance (Suk et al., 2014; Lu et al., 2018; Huang et al., 2019; Punjabi et al., 2019; Zhou et al., 2019). Deep learning can also help to identify features that are important for disease progression or serve as markers for clinical trials (Ithapu et al., 2015).…”
Section: Introduction
confidence: 99%
“…A variational autoencoder was used by Choi et al. [150] to detect anomalies in PET images, thereby providing a score of abnormality used to identify AD patients. The main imaging modalities used for AD classification are T1w MRI and 18F-fluorodeoxyglucose PET [147], but others, for example amyloid PET, have been used [151]. Other types of data, such as speech data [152], also bring meaningful information.…”
Section: Disease Recognition
confidence: 99%
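The abnormality-score idea described above can be illustrated with a minimal sketch: an autoencoder trained on normal scans reconstructs an input, and the mean reconstruction error serves as the anomaly score. The `reconstruct` callable below is a stand-in placeholder, not the published model:

```python
import numpy as np

def anomaly_score(image, reconstruct):
    """Mean squared reconstruction error as an abnormality score.

    image: flattened scan as a 1D numpy array.
    reconstruct: callable mapping an image to its model reconstruction
                 (in practice, a trained VAE's decoder(encoder(x))).
    """
    recon = reconstruct(image)
    return float(np.mean((image - recon) ** 2))

# Toy example: the stand-in "model" can only reproduce the normal scan,
# so deviations from it raise the score.
model = lambda x: np.ones_like(x)  # placeholder for a trained autoencoder
normal = np.ones(1000)
abnormal = normal + 0.5 * np.random.rand(1000)
print(anomaly_score(normal, model) < anomaly_score(abnormal, model))  # True
```

A model trained only on cognitively normal scans reconstructs them well but reconstructs atypical (e.g. AD) scans poorly, which is what makes the error usable as a score.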
“…More recent studies used a single neural network that can deal with multimodal data. Punjabi et al. [151] combined MRI and PET data for diagnosing Alzheimer's disease and showed that using both modalities increases the diagnostic accuracy. Combining histopathological images, genomic data and clinical data can also improve survival prediction in cancers [196,197].…”
Section: Integration of Multimodal Data
confidence: 99%
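A common single-network fusion pattern (late concatenation of per-modality features) can be sketched with numpy; the feature dimensions and the linear classifier head are illustrative assumptions, not the architecture of the cited work:

```python
import numpy as np

def fuse_and_classify(mri_feat, pet_feat, weights, bias):
    """Concatenate per-modality feature vectors and apply a linear classifier.

    In a fusion CNN, mri_feat and pet_feat would come from separate
    convolutional branches; here they are plain feature vectors.
    """
    fused = np.concatenate([mri_feat, pet_feat])   # joint representation
    logits = weights @ fused + bias                # e.g. AD vs. control scores
    return int(np.argmax(logits))                  # predicted class index

rng = np.random.default_rng(0)
mri = rng.standard_normal(64)                      # MRI-branch features
pet = rng.standard_normal(64)                      # PET-branch features
W = rng.standard_normal((2, 128))                  # 2-class linear head
b = np.zeros(2)
print(fuse_and_classify(mri, pet, W, b))           # 0 or 1
```

Fusing after separate branches lets each modality keep its own low-level filters while the classifier sees the joint representation, which is one reason combining MRI and PET can outperform either modality alone.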