2014
DOI: 10.1007/978-3-319-10404-1_81

Multi-frame Super-resolution with Quality Self-assessment for Retinal Fundus Videos

Abstract: This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment …
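
The abstract describes a robust maximum a-posteriori (MAP) reconstruction that fuses motion-compensated low-resolution frames under an imaging model with illumination correction and quality self-assessment. As a rough, hedged illustration of that class of objective only, and not the authors' pipeline, the sketch below runs gradient descent on a Huber-penalised data term plus a quadratic smoothness prior; the box-filter decimation model, the assumption of known integer shifts on the high-resolution grid, the omission of illumination correction and quality self-assessment, and every function name and parameter value are my own simplifications.

```python
# Minimal sketch of robust MAP multi-frame super-resolution (illustration only,
# not the method of the paper). Assumes grayscale frames in [0, 1], known integer
# shifts on the HR grid, box-filter decimation, and HR dimensions divisible by `scale`.
import numpy as np

def decimate(x, s):
    """Box-average downsampling by an integer factor s (assumed sensor model)."""
    h, w = x.shape
    return x[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def shift(x, dy, dx):
    """Integer shift on the HR grid; stands in for full motion compensation."""
    return np.roll(np.roll(x, dy, axis=0), dx, axis=1)

def huber_grad(r, delta=0.05):
    """Derivative of the Huber loss, which down-weights outlier residuals."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def laplacian(x):
    """Discrete Laplacian, used as the gradient of a quadratic smoothness prior."""
    return (-4 * x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
            + np.roll(x, 1, 1) + np.roll(x, -1, 1))

def map_super_resolve(frames, shifts, scale, lam=0.02, step=1.0, iters=100):
    """Gradient descent on sum_k Huber(D S_k x - y_k) + lam * ||grad x||^2 (up to scaling)."""
    # Initialise from the average of naively replicated (upsampled) frames.
    x = np.mean([np.kron(y, np.ones((scale, scale))) for y in frames], axis=0)
    for _ in range(iters):
        g = np.zeros_like(x)
        for y, (dy, dx) in zip(frames, shifts):
            r = decimate(shift(x, dy, dx), scale) - y  # residual in LR space
            # Back-project to the HR grid (replication stands in for the decimation transpose).
            g += shift(np.kron(huber_grad(r), np.ones((scale, scale))), -dy, -dx)
        g -= lam * laplacian(x)                        # smoothness prior gradient
        x -= step * g / len(frames)
    return np.clip(x, 0.0, 1.0)
```

In the paper itself the motion comes from natural eye movements and is estimated, the photometric model includes retrospective illumination correction, and low-quality frames are handled by the quality self-assessment; none of that is reproduced in this sketch.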

Cited by 13 publications (12 citation statements, all classified as mentioning; citing publications from 2015 to 2024)
References 10 publications
“…Retinal multiframe acquisition such as fundus videography can exploit the redundant information across consecutive frames and improve the image degradation model over single-frame acquisition. 68,69 Köhler and colleagues 70 demonstrated how a multiframe super-resolution framework can be used to reconstruct a single high-resolution image from sequential low-resolution video frames. Stankiewicz and colleagues 71 implemented a similar framework for reconstructing super-resolved volumetric OCT stacks from several low-quality volumetric OCT scans.…”
Section: Embedded Ophthalmic Devices (mentioning)
confidence: 99%
“… 100 OCT modalities requiring phase information, such as motion measurement, can benefit from higher bit depths. 101 Even in simple fundus photography, the boundaries between the optic disc and cup can be hard to delineate in some cases because the optic disc is overexposed compared with the surrounding tissue, as illustrated by Köhler and colleagues 70 in their multiframe reconstruction pipeline. A recent feasibility study by Ittarat and colleagues 102 showed that HDR acquisition with tone mapping 100 of fundus images, visualized on standard displays, increased sensitivity but reduced specificity for glaucoma detection among glaucoma experts.…”
Section: Embedded Ophthalmic Devices (mentioning)
confidence: 99%
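
For context on the "HDR acquisition with tone mapping" mentioned in this statement, here is a minimal sketch of one common global tone-mapping operator (Reinhard-style), chosen purely as an example; it is not the operator used in the cited study, and the function name, key value, epsilon, and luminance proxy are assumptions.

```python
# Minimal global tone-mapping sketch (a Reinhard-style operator) for compressing an
# HDR fundus image into a displayable [0, 1] range. Generic illustration only, not the
# tone mapping used in the cited study; the key value and epsilon are assumptions.
import numpy as np

def reinhard_tonemap(hdr, key=0.18, eps=1e-6):
    """Map linear HDR intensities to [0, 1] with a global Reinhard-style curve."""
    # Luminance proxy: channel mean for RGB input, the image itself if grayscale.
    lum = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr
    log_avg = np.exp(np.mean(np.log(lum + eps)))     # log-average ("key") luminance
    scaled = key / log_avg * lum                     # expose the scene to the chosen key
    mapped = scaled / (1.0 + scaled)                 # compress highlights smoothly
    if hdr.ndim == 3:                                # rescale channels to keep colour ratios
        return np.clip(hdr * (mapped / (lum + eps))[..., None], 0.0, 1.0)
    return np.clip(mapped, 0.0, 1.0)
```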
“…SUPER-RESOLUTION (SR) [1] enhances the spatial resolution of digital images without modifying camera hardware. This enables low-cost high-resolution (HR) imagery for improving vision tasks, e.g., in surveillance [2], remote sensing [3], 3D imaging [4], or healthcare [5], [6]. Single-image SR (SISR) infers HR details from a low-resolution (LR) image using self-similarities [7], [8] or example data via classical regression [9], [10], [11], [12] or deep learning [13], [14], [15], [16], [17].…”
Section: Introduction (mentioning)
confidence: 99%
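
As a toy illustration of the "example data" route this citer mentions, and not of any of the cited regression or deep-learning methods, the sketch below transfers high-frequency detail from a single example image onto a bicubic baseline by nearest-neighbour patch lookup; the function names, patch size, and one-image dictionary are assumptions.

```python
# Toy sketch of example-based single-image super-resolution: high-frequency detail is
# transferred from one external example image by nearest-neighbour patch lookup.
# Not any of the cited methods; assumes grayscale images in [0, 1].
import numpy as np
from scipy.ndimage import zoom

def _patches(img, size, stride):
    """Collect flattened size x size patches and their top-left positions."""
    out, pos = [], []
    for i in range(0, img.shape[0] - size + 1, stride):
        for j in range(0, img.shape[1] - size + 1, stride):
            out.append(img[i:i + size, j:j + size].ravel())
            pos.append((i, j))
    return np.asarray(out), pos

def example_based_sr(lr, example_hr, scale=2, size=6):
    """Upscale `lr` bicubically, then paste missing detail looked up in `example_hr`."""
    # Crop the example so its shape is divisible by the scale factor.
    h, w = (d - d % scale for d in example_hr.shape)
    example_hr = example_hr[:h, :w]
    # Simulate the example's low-resolution appearance and its missing high frequencies.
    example_up = zoom(zoom(example_hr, 1 / scale, order=3), scale, order=3)
    dict_lr, _ = _patches(example_up, size, size // 2)
    dict_hf, _ = _patches(example_hr - example_up, size, size // 2)
    # Bicubic baseline for the input, then tile-wise detail transfer.
    base = zoom(lr, scale, order=3)
    result = base.copy()
    tiles, pos = _patches(base, size, size)  # non-overlapping tiles; borders keep the baseline
    for tile, (i, j) in zip(tiles, pos):
        k = np.argmin(((dict_lr - tile) ** 2).sum(axis=1))  # nearest example patch
        result[i:i + size, j:j + size] += dict_hf[k].reshape(size, size)
    return np.clip(result, 0.0, 1.0)
```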
“…This method's disadvantage is that it is limited to global motion, i.e., it only handles planar shifts and rotations [11], [12], so frequency-domain methods are ineffective for multiframe image SR. In spatial-domain methods, an HR image with an improved SNR is reconstructed from multiple low-resolution (LR) frames by exploiting sub-pixel motion in an image sequence [13], [14]. Spatial-domain methods are commonly used in retinal image SR tasks.…”
Section: Introduction (mentioning)
confidence: 99%
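
A minimal numpy sketch of the generic spatial-domain scheme summarised above, assuming purely translational motion between frames: sub-pixel shifts are estimated by phase correlation with parabolic peak refinement, and the frames are fused onto a finer grid by shift-and-add. This is a textbook construction for illustration, not an implementation of the cited methods [13], [14]; the function names, the 2x factor, and the nearest-neighbour placement are assumptions.

```python
# Minimal shift-and-add multi-frame SR sketch: estimate translational sub-pixel shifts
# by phase correlation, then accumulate LR samples onto a finer reference grid.
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the (dy, dx) offset that maps `frame` samples onto `ref` coordinates."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = []
    for axis, p in enumerate(peak):
        # Parabolic interpolation around the correlation peak gives sub-pixel precision.
        c0 = corr[peak]
        offset = np.eye(2, dtype=int)[axis]
        cm = corr[tuple((np.array(peak) - offset) % corr.shape)]
        cp = corr[tuple((np.array(peak) + offset) % corr.shape)]
        denom = cm - 2.0 * c0 + cp
        frac = 0.5 * (cm - cp) / denom if abs(denom) > 1e-12 else 0.0
        d = p + frac
        if d > corr.shape[axis] / 2:   # unwrap to a signed shift
            d -= corr.shape[axis]
        shift.append(d)
    return tuple(shift)

def shift_and_add(frames, scale=2):
    """Fuse LR frames onto a `scale`-times finer grid using estimated sub-pixel shifts."""
    ref = frames[0]
    H, W = ref.shape[0] * scale, ref.shape[1] * scale
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    ys, xs = np.mgrid[0:ref.shape[0], 0:ref.shape[1]]
    for frame in frames:
        dy, dx = phase_correlation_shift(ref, frame)
        # Place every LR sample at its nearest HR grid position in the reference frame.
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, H - 1)
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, W - 1)
        np.add.at(acc, (hy, hx), frame)
        np.add.at(cnt, (hy, hx), 1.0)
    # Grid positions that received no sample are left at 0 here; a real pipeline
    # would fill them by interpolation.
    return np.where(cnt > 0, acc / np.maximum(cnt, 1.0), 0.0)
```

A robust pipeline would additionally deconvolve the camera blur, reject unreliable frames, and regularise the estimate, along the lines of the MAP formulation sketched under the abstract above.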