2021
DOI: 10.1101/2021.05.28.21257318
Preprint

Development of a Deep Learning Model for Early Alzheimer’s Disease Detection from Structural MRIs and External Validation on an Independent Cohort

Abstract: Early diagnosis of Alzheimer's disease plays a pivotal role in patient care and clinical trials. In this study, we have developed a new approach based on 3D deep convolutional neural networks to accurately differentiate mild Alzheimer's disease dementia from mild cognitive impairment and cognitively normal individuals using structural MRIs. For comparison, we have built a reference model based on the volumes and thickness of previously reported brain regions that are known to be implicated in disease progressi…

Cited by 3 publications (5 citation statements)
References 41 publications
“…Eric et al. [34] applied a label correction model to distinguish noisy and clean labels. Liu [17] first investigated the early-learning phenomenon to counteract the influence of noisy labels. Huang et al. [35] constructed a Gaussian mixture model to handle noisy labels while training twin contrastive learning models for robust representations.…”
Section: Noisy Learning
confidence: 99%
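
As context for the Gaussian-mixture idea quoted above, here is a minimal sketch of the common loss-based clean/noisy separation: fit a two-component GMM to per-sample training losses and treat the low-loss component as the clean set. The function name and threshold are illustrative assumptions, not the cited authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def split_clean_noisy(per_sample_losses, clean_prob_threshold=0.5):
    """Fit a two-component GMM to per-sample losses and flag as clean
    the samples most likely drawn from the low-loss component."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2).fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))  # low-mean mode = clean
    clean_prob = gmm.predict_proba(losses)[:, clean_component]
    return clean_prob >= clean_prob_threshold, clean_prob

# Toy example: small losses come back flagged clean, large ones noisy.
losses = [0.05, 0.10, 0.08, 2.3, 1.9, 0.07, 2.1]
is_clean, p_clean = split_clean_noisy(losses)
```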
“…However, the identified neighbours or their annotations may be incorrect due to the noisy labels used in training. Moreover, towards the end of training, the network is more prone to overfitting to incorrect examples than in early epochs [11,3]. With the aim of obtaining more robust scores and inspired by the temporal ensembling strategy used in [8,11], we propose to compute a weighted average of the scores obtained for the previous epoch and the current k-NN scores. Unlike other k-NN sample selection approaches [4,5,6], we compute scores by taking into account the model's evolution during training.…”
Section: Proposed Approach
confidence: 99%
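
The weighted-average update described in that snippet can be read as an exponential moving average between last epoch's scores and the current epoch's k-NN scores. The sketch below makes that concrete; the helper names, the momentum value, and the k-NN "agreement" scoring are assumptions for illustration (features and labels are numpy arrays), not the paper's exact method.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_agreement_scores(features, labels, k=10):
    """Fraction of each sample's k nearest neighbours that share its label."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # column 0 is the sample itself
    neighbour_labels = labels[idx[:, 1:]]   # drop the self-match
    return (neighbour_labels == labels[:, None]).mean(axis=1)

def update_scores(prev_scores, features, labels, momentum=0.9, k=10):
    """Weighted average of last epoch's scores and the current k-NN scores,
    so the score reflects the model's evolution across training."""
    current = knn_agreement_scores(features, labels, k=k)
    if prev_scores is None:                 # first epoch: no history yet
        return current
    return momentum * prev_scores + (1.0 - momentum) * current
```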
“…Recently, various methods have been proposed to relax this strong clean subset assumption by taking advantage of the characteristic that noisy labels are more difficult-to-learn than clean samples. For example, such difficult-to-learn samples are guided by regularizers (Liu et al. 2020; Cao et al. 2019, 2020), given lower weights (Wang et al. 2019; Zhang and Sabuncu 2018), cleansed out (Mirzasoleiman, Cao, and Leskovec 2020; Wu et al. 2020; Pleiss et al. 2020; Han et al. 2018; Yu et al. 2019), or treated as unlabeled samples in a semi-supervised learning algorithm (Li, Socher, and Hoi 2019).…”
Section: Introduction
confidence: 99%
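
As a concrete instance of the "given lower weights" strategy surveyed above, here is a hedged sketch that down-weights high-loss (likely noisy) samples via a softmax over negative losses. The weighting scheme and temperature are illustrative assumptions, not drawn from any of the cited papers.

```python
import numpy as np

def loss_weighted_mean(per_sample_losses, temperature=1.0):
    """Softmax over negative losses: high-loss (likely noisy) samples
    receive exponentially smaller weights in the averaged loss."""
    losses = np.asarray(per_sample_losses, dtype=np.float64)
    logits = -losses / temperature
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    return float((weights * losses).sum())
```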