2017
DOI: 10.1007/978-3-319-67561-9_14
Retinal Image Quality Classification Using Fine-Tuned CNN

Cited by 10 publications (5 citation statements) | References 8 publications
“…Blind IQA has become more generalizable with the development of machine learning and does not require additional information beyond the original data [14]. As such, it is commonly used for fundus image classification [15][16][17][18][19][20][21] and has proven effective at eliminating low-quality images. Until 2019, few studies had investigated blind IQA methods for OCT.…”
Section: Introduction
confidence: 99%
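The blind (no-reference) IQA idea quoted above can be illustrated with a classic sharpness heuristic: the variance of the image Laplacian, which needs no reference image. The sketch below is a minimal, hypothetical illustration of the concept on plain 2-D lists, not any of the cited methods.

```python
def laplacian_variance(img):
    """Blind sharpness score: variance of the 4-neighbour Laplacian.

    `img` is a 2-D list of grey levels. Higher variance means more
    high-frequency detail, i.e. a sharper (more likely gradable) image.
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A checkerboard (lots of edges) vs. a flat patch (no detail at all).
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
flat = [[128 for _ in range(8)] for _ in range(8)]
```

Thresholding such a score is one simple way a blind IQA pipeline can reject unusable images before downstream analysis.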
“…Sun et al. [25] evaluated the performance of various CNNs (VGG-16, ResNet-50, AlexNet, and GoogLeNet) in classifying retinal images. The study also evaluated two major factors: pre-processing and data augmentation.…”
Section: Related Work
confidence: 99%
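Data augmentation, one of the two factors the quoted study evaluated, typically means generating label-preserving variants of each training image (flips, rotations, and so on). A minimal stdlib-only sketch of such transforms on 2-D list images, purely for illustration:

```python
def hflip(img):
    """Horizontal flip of a 2-D list image (mirror each row)."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2-D list image 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def augment(img):
    """Return the original image plus flipped and rotated variants."""
    return [img, hflip(img), rot90(img), rot90(rot90(img))]
```

For retinal images these geometric transforms preserve the quality label, so each labelled example effectively yields several training samples, which helps counter overfitting on small datasets.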
“…On a larger dataset (9653 ungradable and 11347 gradable retinal images), they also evaluated a hybrid method combining saliency maps and CNNs [44]. Finally, [45] compares the performance of fine-tuning four CNN architectures, AlexNet [35], GoogLeNet [46], VGG-16 [47], and ResNet-50, on a 3000-image subset of the Kaggle database. These preliminary studies report that large networks are hard to train and must contend with overfitting, owing to their huge number of parameters.…”
Section: Related Work
confidence: 99%
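Fine-tuning, as compared across the four architectures above, usually means keeping pretrained backbone weights frozen (or nearly so) and training only a small classification head, which limits the number of trainable parameters and the overfitting the quoted studies mention. The sketch below is a toy, framework-free stand-in for that idea: a frozen random projection plays the role of pretrained convolutional features (a hypothetical stand-in, not a real CNN), and only a logistic-regression head is trained.

```python
import math
import random

random.seed(0)

# "Backbone": a frozen random projection standing in for pretrained
# convolutional features. It is never updated during fine-tuning.
DIM_IN, DIM_FEAT = 4, 8
W_frozen = [[random.uniform(-1, 1) for _ in range(DIM_IN)]
            for _ in range(DIM_FEAT)]

def features(x):
    """Frozen feature extractor: ReLU(W_frozen @ x)."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)))
            for row in W_frozen]

# Trainable "head": logistic regression on the frozen features.
w_head = [0.0] * DIM_FEAT
b_head = 0.0

def predict(x):
    z = sum(w * f for w, f in zip(w_head, features(x))) + b_head
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety
    return 1.0 / (1.0 + math.exp(-z))

def fine_tune(data, lr=0.5, epochs=200):
    """SGD on the head only; the backbone stays frozen."""
    global b_head
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            err = predict(x) - y  # d(log-loss)/dz
            for i in range(DIM_FEAT):
                w_head[i] -= lr * err * f[i]
            b_head -= lr * err

# Toy "gradable vs. ungradable" data: two well-separated clusters.
data = [([1, 1, 0, 0], 1), ([0.9, 1.1, 0, 0], 1),
        ([0, 0, 1, 1], 0), ([0, 0, 1.1, 0.9], 0)]
fine_tune(data)
```

Real fine-tuning of AlexNet, GoogLeNet, VGG-16, or ResNet-50 replaces the random projection with pretrained convolutional layers (often unfreezing the last few), but the head-only training shown here is the same mechanism that keeps the trainable parameter count small.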