Automatic exudate segmentation in colour retinal fundus images is an important task in computer-aided diagnosis and screening systems for diabetic retinopathy. In this paper, we present a location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images, which consists of three stages: anatomic structure removal, exudate location and exudate segmentation. In the anatomic structure removal stage, a matched-filter-based main vessel segmentation method and a saliency-based optic disk segmentation method are proposed. The main vessels and optic disk are then removed to eliminate the adverse effects they would otherwise introduce in the second stage. In the location stage, we learn a random forest classifier to classify patches into two classes, exudate patches and exudate-free patches, where histograms of completed local binary patterns are extracted to describe the texture structure of each patch. Finally, the local variance, a size prior on the exudate regions and a local contrast prior are used to segment the exudate regions from the patches classified as exudate patches in the location stage. We evaluate our method at both the exudate level and the image level. For exudate-level evaluation, we test our method on the e-ophtha EX dataset, which provides pixel-level annotations from specialists. The experimental results show that our method achieves 76% sensitivity and 75% positive prediction value (PPV), both of which significantly outperform state-of-the-art methods. For image-level evaluation, we test our method on DiaRetDB1 and achieve competitive performance compared to state-of-the-art methods.
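As a rough illustration of the location stage described above, the sketch below trains a random forest on texture histograms extracted from image patches. It is not the authors' implementation: plain uniform LBP histograms from scikit-image stand in for the completed LBP (CLBP) descriptor used in the paper, and the `patches` and `labels` inputs are assumed to be supplied by the caller.

```python
# Minimal sketch of the patch-level exudate location stage.
# Assumption: plain uniform LBP is a stand-in for the CLBP descriptor.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def lbp_histogram(patch, P=8, R=1):
    # Uniform LBP codes take values in [0, P+1]; their normalised histogram
    # serves as the texture descriptor of the patch.
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def train_patch_classifier(patches, labels):
    # patches: iterable of 2-D grey-level arrays cropped from the fundus image
    # labels:  1 for exudate patches, 0 for exudate-free patches
    X = np.array([lbp_histogram(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf

# Usage (hypothetical data):
#   clf = train_patch_classifier(train_patches, train_labels)
#   is_exudate = clf.predict([lbp_histogram(new_patch)])
```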
Abstract. In this paper, we propose two vesselness maps and a simple-to-difficult learning framework for retinal vessel segmentation that requires no ground truth. The first vesselness map is a multiscale centreline-boundary contrast map inspired by the appearance of vessels. The other is a difference-of-diffusion map, which measures the difference between the diffused image and the original one. In addition, two existing vesselness maps are generated, giving four vesselness maps in total. In each vesselness map, pixels with large vesselness values are regarded as positive samples, and pixels around the positive samples with small vesselness values are regarded as negative samples. We then learn a strong classifier for the retinal image based on the other three vesselness maps to determine the labels of pixels with mediocre values in a single vesselness map. Finally, pixels with support from two classifiers are labelled as vessel pixels. Experimental results on DRIVE and STARE show that our method outperforms state-of-the-art unsupervised methods and achieves performance competitive with supervised methods.
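The sketch below illustrates, under stated assumptions, how a difference-of-diffusion vesselness map and the confident positive/negative sample selection might look in code. Gaussian smoothing stands in for the unspecified diffusion process, and the percentile thresholds and neighbourhood radius are illustrative choices rather than values from the paper.

```python
# Minimal sketch of one vesselness map and ground-truth-free sample selection.
# Assumptions: Gaussian smoothing approximates the diffusion step; thresholds
# hi/lo and the dilation radius are placeholders chosen for illustration.
import numpy as np
from scipy.ndimage import gaussian_filter, binary_dilation

def diffusion_difference_map(green_channel, sigma=2.0):
    # Vessels are thin dark structures, so diffusing the image removes them;
    # the (diffused - original) residue is therefore large on vessel pixels.
    green = green_channel.astype(float)
    diffused = gaussian_filter(green, sigma)
    return np.clip(diffused - green, 0, None)

def select_training_samples(vesselness, hi=95, lo=50, radius=10):
    # Confident positives: pixels with the highest vesselness values.
    # Confident negatives: low-vesselness pixels near the positives.
    # Pixels in the mediocre band are left for the cross-map classifiers.
    t_hi, t_lo = np.percentile(vesselness, [hi, lo])
    positives = vesselness >= t_hi
    near_positives = binary_dilation(positives, iterations=radius)
    negatives = near_positives & (vesselness <= t_lo)
    return positives, negatives
```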