Medical Imaging 2020: Computer-Aided Diagnosis (2020)
DOI: 10.1117/12.2549483

Multi-modal deep learning for predicting progression of Alzheimer's disease using bi-linear shake fusion

Cited by 2 publications (2 citation statements, years published: 2022 and 2024); references 0 publications.
“…For the case of using only brain images as input, our model ( M2 in Supplementary Table 2 ), based on only T1-weighted MRI images (accuracy: 78%, AUC: 85%, sensitivity: 78%, specificity: 78%), performed better than a previous deep neural network (DNN) 8 (accuracy: 75%, AUC: not available [NA], sensitivity: 75%, specificity: 75%), was equivalent to a DNN 9 that used a much easier task definition (accuracy: 79%, AUC: NA, sensitivity: 75%, specificity: 82%), but was inferior to a DNN 7 using both MRI and PET images (accuracy: 83%, AUC: NA, sensitivity: 80%, specificity: 84%) and a DNN 10 trained on mixed groups of cognitively normal (CN) + sMCI and pMCI + AD subjects (accuracy: 83%, AUC: 88%, sensitivity: 76%, specificity: 87%). For the case of using not only images but also non-image information, the performance of our model ( M5 in Supplementary Table 2 ) (accuracy: 88%, AUC: 95%, sensitivity: 88%, specificity: 88%) was better than that of state-of-the-art models using an SVM 13 , a DNN 16 , and a random forest 17 (accuracy: 85%–87%, AUC: 87%–90%). Considering that the state-of-the-art methods were evaluated only on validation datasets (not test datasets), the potential superiority of our model should be even greater (see Supplementary Table 6 ).…”
Section: Results (citation type: mentioning)
confidence: 84%
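
The comparison above is framed entirely in terms of accuracy, AUC, sensitivity, and specificity. For reference, the sketch below shows how these four metrics are typically computed; it assumes scikit-learn, and the label and score arrays are placeholders, not data from the study.

```python
# Minimal sketch: computing the four metrics used in the comparison
# (accuracy, AUC, sensitivity, specificity) with scikit-learn.
# The arrays below are placeholder data, not values from the study.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])   # 1 = progressed (pMCI), 0 = stable (sMCI)
y_score = np.array([0.2, 0.4, 0.9, 0.7, 0.6, 0.1, 0.8, 0.3])  # model output probabilities
y_pred = (y_score >= 0.5).astype(int)          # hypothetical decision threshold of 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)           # threshold-free, unlike the other three
sensitivity = tp / (tp + fn)                   # true positive rate: pMCI correctly flagged
specificity = tn / (tn + fp)                   # true negative rate: sMCI correctly cleared

print(f"accuracy={accuracy:.2f}, AUC={auc:.2f}, "
      f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```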
“…5 . For the model that uses both image and non-image information, bilinear fusion 16 was used to combine the features extracted from the images with the features extracted from the non-image information. The same AE as shown in Supplementary Fig.…”
Section: Methods (citation type: mentioning)
confidence: 99%
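
The quoted passage names bilinear fusion as the mechanism for combining image features with non-image features. The sketch below shows plain bilinear fusion, assuming PyTorch's nn.Bilinear (which computes one output z_k = x1ᵀ W_k x2 + b_k per output unit); the feature dimensions, the ReLU, and the two-class head are illustrative assumptions, and the "shake" regularization named in the paper's title is not reproduced here.

```python
# Minimal sketch of bilinear fusion of image and non-image features, assuming
# PyTorch. Dimensions (128, 16, 64) are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class BilinearFusion(nn.Module):
    def __init__(self, img_dim=128, tab_dim=16, fused_dim=64):
        super().__init__()
        # Each fused unit k computes img^T W_k tab + b_k, so every image
        # feature interacts multiplicatively with every non-image feature.
        self.bilinear = nn.Bilinear(img_dim, tab_dim, fused_dim)
        self.classifier = nn.Linear(fused_dim, 2)  # e.g. sMCI vs. pMCI

    def forward(self, img_feat, tab_feat):
        fused = torch.relu(self.bilinear(img_feat, tab_feat))
        return self.classifier(fused)

model = BilinearFusion()
img_feat = torch.randn(4, 128)  # e.g. CNN features from T1-weighted MRI
tab_feat = torch.randn(4, 16)   # e.g. encoded cognitive scores / demographics
logits = model(img_feat, tab_feat)
print(logits.shape)             # torch.Size([4, 2])
```

Compared with simple concatenation, the bilinear form captures pairwise interactions between the two modalities directly, at the cost of an img_dim × tab_dim × fused_dim weight tensor.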