2016
DOI: 10.1109/tmi.2015.2506902

Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs

Abstract: Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair …

Cited by 94 publications (73 citation statements)
References 31 publications

“…We can make the following observations: (1) The performance of the non-deep learning baseline [16] is clearly lower than that of the deep learning-based methods. This is reasonable because deep learning can extract highly discriminative representations directly from the retinal images, using multiple CNN layers, which are superior to the hand-crafted features in [16] and lead to better performance. (2) Across the different color spaces, the networks trained in the RGB and LAB color spaces perform better than the one trained in the HSV color space.…”
Section: Results
confidence: 99%
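As a concrete illustration of the color-space comparison in the statement above, the following is a minimal sketch (not code from the cited papers) that prepares the same fundus image in the RGB, LAB, and HSV color spaces, e.g. as inputs to separate per-color-space CNN branches. It assumes OpenCV and NumPy are available; "fundus.png" is a placeholder file name.

```python
# Hedged sketch: derive RGB, LAB, and HSV versions of one fundus image,
# as would be fed to per-color-space CNN branches. "fundus.png" is a
# placeholder path, not a file from the cited work.
import cv2
import numpy as np

bgr = cv2.imread("fundus.png")               # OpenCV reads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)   # input for the RGB branch
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)   # input for the LAB branch
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # input for the HSV branch

# Rough scaling to [0, 1] before a CNN; in practice each color space has
# its own channel ranges and would be normalized per channel.
inputs = {name: img.astype(np.float32) / 255.0
          for name, img in (("rgb", rgb), ("lab", lab), ("hsv", hsv))}
```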
“…We also report the average result (AVG) when combining the predictions of the three color spaces directly, without the fusion block. For the non-deep learning baseline, we implement the RIQA method from [16], which is based on three visual characteristics (i.e., multi-channel sensation, just noticeable blur, and the contrast sensitivity function) and an SVM classifier with a radial basis function kernel. For evaluation metrics, we employ average accuracy, precision, recall, and F-measure (2 × precision × recall / (precision + recall)).…”
Section: Methods
confidence: 99%
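For readers unfamiliar with the baseline setup described in that statement, here is a hedged sketch of an SVM classifier with a radial basis function kernel evaluated with accuracy, precision, recall, and F-measure. The feature vectors below are random placeholders, not the HVS features of [16]; scikit-learn and NumPy are assumed.

```python
# Hedged sketch of the kind of baseline described above: an RBF-kernel SVM
# classifying image-quality feature vectors, scored with accuracy, precision,
# recall, and F-measure = 2 * precision * recall / (precision + recall).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(536, 10))        # placeholder feature vectors, one per image
y = rng.integers(0, 2, size=536)      # placeholder labels: 0 = poor, 1 = fair quality

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
pred = clf.predict(X_test)

print("accuracy :", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))
print("F-measure:", f1_score(y_test, pred))
```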
“…portable tool in generating satisfactory fundus images for retinal assessment, especially during screening [8,11,14,25]. However, the current study has shown that the retinal image photographed by the Peek Retina was preferable for tracing the retinal vascular network for further analysis.…”
Section: Peek 3DPO
confidence: 56%
“…In the next work under this category, the authors underscored the importance of retinal IQA research by noting that portable, handheld fundus imaging devices are more sensitive to distortions. Based on a Human Visual System (HVS) framework, in 2016 S. Wang, K. Jin, H. Lu et al. [100] presented a machine learning approach for quality prediction of portable fundus images. Initially, quality scores were collected through subjective evaluation by three ophthalmologists on a dataset of 536 images.…”
Section: Feature Extraction Based On Generic Image Statistics
confidence: 99%
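The subjective grading step mentioned in that statement lends itself to a simple illustration. One common way (an assumption here, not a detail reported for [100]) to turn grades from three ophthalmologists into a single training label per image is a majority vote:

```python
# Illustrative sketch only: per-image majority vote over three graders.
# Grade values and array shapes are assumptions, not details from [100].
import numpy as np

# grades[i, j] = 1 if grader j judged image i acceptable quality, else 0
grades = np.array([
    [1, 1, 0],
    [0, 0, 1],
    [1, 1, 1],
])

labels = (grades.sum(axis=1) >= 2).astype(int)  # 1 if at least 2 of 3 graders agree
print(labels)  # -> [1 0 1]
```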