2021
DOI: 10.1109/tmi.2020.3023254

End-to-End Fovea Localisation in Colour Fundus Images With a Hierarchical Deep Regression Network

Abstract: Accurately locating the fovea is a prerequisite for developing computer-aided diagnosis (CAD) of retinal diseases. In colour fundus images of the retina, the fovea is a fuzzy region lacking prominent visual features, which makes it difficult to locate directly. While traditional methods rely on explicitly extracting image features from surrounding structures such as the optic disc and the vessels to infer the position of the fovea, deep learning-based regression techniques can implicitly mod…

Cited by 24 publications (19 citation statements).
References 41 publications.
“…From Table II, the Bilateral-ViT achieves state-of-the-art performance for all the evaluation settings. In particular, on the Messidor dataset, at 1/8R, our network reaches the best accuracy of 85.65% with a gain of 1.84% compared to the second-best score (83.81%) [13]. It also reaches an accuracy of 100% at evaluation thresholds of 1/2R, 1R, and 2R; in other words, the localization errors are at most 1/2R (approximately 19 pixels for an input image size of 512 × 512).…”
Section: B. Comparison With State-of-the-Art Methods (mentioning)
confidence: 87%
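For context, the accuracy figures in this statement follow the usual fovea-localisation protocol: a prediction counts as correct at threshold f·R if its Euclidean distance from the annotated fovea centre is at most f times the optic disc radius R. The sketch below illustrates that rule; the function name, the synthetic data, and the R ≈ 38 px value (implied by the quoted ~19 px at 1/2R for a 512 × 512 input) are assumptions for illustration, not code from the cited papers.

import numpy as np

def accuracy_at_r_fractions(pred_xy, gt_xy, disc_radius, fractions=(0.125, 0.25, 0.5, 1.0, 2.0)):
    """Fraction of images whose predicted fovea lies within f*R of the annotation,
    for each threshold fraction f, where R is the optic disc radius in pixels."""
    pred_xy = np.asarray(pred_xy, dtype=float)      # shape (N, 2): predicted (x, y)
    gt_xy = np.asarray(gt_xy, dtype=float)          # shape (N, 2): annotated (x, y)
    dist = np.linalg.norm(pred_xy - gt_xy, axis=1)  # Euclidean error per image
    return {f: float(np.mean(dist <= f * disc_radius)) for f in fractions}

# Synthetic check: with R = 38 px on a 512 x 512 image, the 1/2R threshold is ~19 px.
rng = np.random.default_rng(0)
gt = rng.uniform(100.0, 400.0, size=(200, 2))    # hypothetical fovea annotations
pred = gt + rng.normal(0.0, 5.0, size=(200, 2))  # hypothetical predictions
print(accuracy_at_r_fractions(pred, gt, disc_radius=38.0))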
“…It consists of 400 images annotated with fovea locations, in which 213 images are pathologic myopia, and the remaining 187 images are normal retinas. For the fairness of comparisons, we keep our data split identical to [13].…”
Section: Methods (mentioning)
confidence: 99%
“…Researchers all over the world aim to make robots achieve intelligent, human-like grasp capabilities, which will expand the application field of robots and create huge economic and social value. With the rapid development of deep learning and the improvement of camera sensor hardware, research on object recognition and location based on machine vision has made great progress [1], but there is less research on object grasp point detection, and it is mainly focused on rectangular grasp strategy [2][3][4][5][6]. Ian Lenz et al [3] proposed a two-layer cascaded deep learning network to predict grasping strategy: The first deep network was used to quickly exclude the impossible grasp options; the second filtered the grasp strategy based on the first network and output the optimal value.…”
Section: Introduction (mentioning)
confidence: 99%
“…An accurate localization of the fovea, an important anatomical landmark in the retina, can be beneficial to the computer aided diagnosis of retinal diseases. Huang et al [1] take advantage of the geometrical relationship between optic disc and fovea to achieve a more accurate localization and Xie et al [2] utilize a three-stage network with coarse-fine fusion. MSE loss is used in both approaches.…”
Section: Introduction (mentioning)
confidence: 99%
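The MSE loss mentioned in this statement is computed directly on the regressed fovea coordinates. A minimal sketch of such a coordinate-space MSE follows; the function name and batch layout are assumed for illustration and are not taken from either cited method.

import numpy as np

def mse_coordinate_loss(pred_xy, true_xy):
    """Mean squared error between predicted and annotated fovea coordinates,
    averaged over the batch and the (x, y) components."""
    pred_xy = np.asarray(pred_xy, dtype=float)  # shape (N, 2)
    true_xy = np.asarray(true_xy, dtype=float)  # shape (N, 2)
    return float(np.mean((pred_xy - true_xy) ** 2))

# Example: two predictions, each a few pixels off their annotations.
print(mse_coordinate_loss([[256.0, 240.0], [300.0, 310.0]],
                          [[254.0, 243.0], [305.0, 308.0]]))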