Retinal image registration is important for assisting diagnosis and monitoring retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various applications requires feature points that are detected on, and distributed across, low-quality regions containing vessels of varying contrast and size. A recent feature detector known as Saddle produces feature points that are poorly distributed and densely clustered on high-contrast vessels. Therefore, we propose a multiresolution difference-of-Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points in low-quality regions containing vessels of varying contrast and size. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, whereas lower success rates are observed for four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest (Spearman) correlation with the intensity uniformity metric among all methods. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.
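For readers unfamiliar with the pyramid component, the following minimal Python sketch shows how a multiresolution difference-of-Gaussian (DoG) pyramid of the kind D-Saddle scans can be built. The octave count, sigma values, and function name are illustrative assumptions, not the published D-Saddle parameters.

```python
import cv2
import numpy as np

def dog_pyramid(image, n_octaves=4, sigmas=(1.0, 1.6, 2.56)):
    """Build a multiresolution DoG pyramid (illustrative settings only)."""
    pyramid = []
    base = image.astype(np.float32)
    for _ in range(n_octaves):
        # Blur the octave base at several scales, then subtract adjacent
        # levels so blob- and ridge-like vessel structures stand out.
        blurred = [cv2.GaussianBlur(base, (0, 0), s) for s in sigmas]
        dogs = [b2 - b1 for b1, b2 in zip(blurred, blurred[1:])]
        pyramid.append(dogs)
        base = cv2.pyrDown(base)  # halve the resolution for the next octave
    return pyramid

# Candidate feature points would then come from running the Saddle detector
# over every DoG level rather than over the original image alone, which is
# what lets low-contrast vessels at coarse scales contribute points.
```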
Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometrical transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be spread throughout the image. However, noise in a fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates the characteristics of both retinal vessels and noise to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiment, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). CURVE is then paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered fewer than 27.612% of the image pairs. A one-way ANOVA showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).
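Although the abstract does not give implementation details, the detector-plus-SIFT-descriptor pairing it describes can be sketched with OpenCV as below. Here `curve_keypoints` is a hypothetical stand-in (Shi-Tomasi corners) for the actual CURVE detector, and `fundus.png` is a placeholder path.

```python
import cv2

def curve_keypoints(gray):
    """Hypothetical stand-in for CURVE: Shi-Tomasi corners, not the paper's method."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
    return [cv2.KeyPoint(float(x), float(y), 16) for [[x, y]] in pts]

gray = cv2.imread("fundus.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
kps = curve_keypoints(gray)
sift = cv2.SIFT_create()
# compute() attaches SIFT descriptors to externally supplied keypoints
# instead of re-detecting them, which is all a detector-descriptor
# pairing of this kind requires.
kps, descriptors = sift.compute(gray, kps)
```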
A feature-based retinal image registration (RIR) technique aligns multiple fundus images and is composed of pre-processing, feature point extraction, feature description, matching and geometrical transformation. Challenges in RIR include differences in scale, intensity and rotation between images. The scale and intensity differences can be minimised with a consistent imaging setup and with image enhancement during pre-processing, respectively. The rotation can be addressed with a feature descriptor that is robust to varying rotation. Therefore, a feature descriptor based on statistical properties (FiSP) is proposed to describe the circular region surrounding a feature point. In experiments on the public Fundus Image Registration (FIRE) dataset, FiSP established 99.227% average correct matches for rotations between 0° and 180°. FiSP is then paired with the Harris corner, scale-invariant feature transform (SIFT), speeded-up robust features (SURF), Ghassabi's and D-Saddle feature point extraction methods to assess its registration performance against existing feature-based RIR techniques, namely generalised dual-bootstrap iterative closest point (GDB-ICP), Harris-partial intensity invariant feature descriptor (PIIFD), Ghassabi's-SIFT, H-M 16, H-M 17 and D-Saddle-histogram of oriented gradients (HOG). The combination of SIFT-FiSP registered 64.179% of the image pairs and significantly outperformed the other techniques, with mean differences between 25.373% and 60.448% (p < 0.001*).
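To illustrate the idea behind a statistics-based descriptor of a circular region: because the statistics are computed over an unordered set of pixels, the resulting vector does not change when the patch is rotated, which is the property the abstract attributes to FiSP. The abstract does not enumerate which statistics FiSP actually uses, so the five moments in this sketch are assumptions.

```python
import numpy as np
from scipy import stats

def stats_descriptor(image, x, y, radius=15):
    """Summarise the circular neighbourhood of (x, y) with global statistics.

    The statistics are order-free over the pixel set, so the descriptor
    is unchanged by patch rotation. The choice of these five moments is
    an assumption for illustration, not FiSP's published feature set.
    """
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    pixels = image[mask].astype(np.float64)
    return np.array([pixels.mean(), pixels.std(),
                     stats.skew(pixels), stats.kurtosis(pixels),
                     np.median(pixels)])
```

Descriptors of this form trade spatial detail for rotation robustness: two patches with the same intensity distribution but different layouts map to the same vector, so they are typically paired with a strong detector and a geometric verification step, as in the pipeline above.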