2022
DOI: 10.3389/fphys.2022.918929
Multi-Model Domain Adaptation for Diabetic Retinopathy Classification

Abstract: Diabetic retinopathy (DR) is one of the most threatening complications in diabetic patients, leading to permanent blindness without timely treatment. However, DR screening is not only a time-consuming task that requires experienced ophthalmologists but also easy to produce misdiagnosis. In recent years, deep learning techniques based on convolutional neural networks have attracted increasing research attention in medical image analysis, especially for DR diagnosis. However, dataset labeling is expensive work a…

Cited by 5 publications (4 citation statements); References 36 publications
“…The comparative results are presented in Table 9. Zhang et al. [43] proposed a multi-model domain adaptation (MMDA) method, and they trained it on source domains including the DDR, IDRiD, Messidor, and Messidor-2 datasets and tested it on the target domain APTOS. Their method achieved high sensitivity but had an accuracy of 90.6%, which is lower than that of VMLRI (93.42%).…”
Section: Experiments and Results
confidence: 99%
“…Although the Messidor-2 dataset has been used by several groups, there are different allocations of referable DR images [20, 21], and it is usually used for external validation, which does not allow direct comparison with previous traditional machine learning studies [22, 23].…”
Section: Discussion
confidence: 99%
“…Because there are too few samples for training, overfitting is easy; i.e., the model performs better on the training samples, but the generalization effect on the test set is unsatisfactory. To alleviate this phenomenon, we use different methods to augment the dataset, primarily including random horizontal and vertical flipping and rotation in an arbitrary direction [24]. During training, the loss function is cross-entropy loss, AdamW is used as the optimizer with weight decay set to 0.05, and we use a transfer-learning approach based on ImageNet pretrained weights.…”
Section: Model Architecture (Swin)
confidence: 99%
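The training recipe quoted above (cross-entropy loss, AdamW optimizer, weight decay 0.05) relies on AdamW's decoupled weight decay: the decay is applied directly to the weights rather than being folded into the gradient as in classic Adam with L2 regularization. As a minimal, framework-free sketch of that update rule (the function and variable names here are illustrative, not from the cited paper):

```python
import math

def adamw_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.05):
    """One AdamW update for a single scalar parameter.

    AdamW decouples weight decay from the gradient-based update:
    the term lr * weight_decay * w is subtracted from the weight
    directly, instead of entering the moment estimates m and v.
    """
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v

# Example: one step from w = 1.0 with gradient 0.5
w, m, v = adamw_step(1.0, 0.5, 0.0, 0.0, t=1)
```

Because the decay term bypasses the adaptive moment estimates, a setting such as weight_decay=0.05 regularizes every weight at a uniform rate, which is one common reason AdamW is preferred over Adam-plus-L2 when fine-tuning pretrained models.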