Retinal blood vessel morphological abnormalities are generally associated with cardiovascular, cerebrovascular, and systemic diseases, so automatic artery/vein (A/V) classification is particularly important for medical image analysis and clinical decision making. However, current methods still have limitations in A/V classification, especially errors at vessel edges and ends caused by single-scale feature extraction and the blurred boundaries between arteries and veins. To alleviate these problems, we propose a vessel-constraint network (VC-Net) that utilizes vessel distribution and edge information to enhance A/V classification; it is a high-precision A/V classification model based on data fusion. In particular, VC-Net introduces a vessel-constraint (VC) module that combines local and global vessel information to generate a weight map that constrains the A/V features, suppressing background-prone features and enhancing the edge and end features of blood vessels. In addition, VC-Net employs a multiscale feature (MSF) module to extract blood vessel information at different scales, improving the feature extraction capability and robustness of the model. VC-Net also produces vessel segmentation results simultaneously. The proposed method is tested on publicly available fundus image datasets with different scales, namely DRIVE, LES, and HRF, and validated on two newly created multicenter datasets, Tongren and Kailuan. On the DRIVE dataset, we achieve a balanced accuracy of 0.9554 and F1 scores of 0.7616 and 0.7971 for arteries and veins, respectively. The experimental results show that the proposed model achieves competitive performance in A/V classification and vessel segmentation compared with state-of-the-art methods. Finally, we test on the Kailuan dataset with a model trained on the other, fused datasets, and the results also show good robustness. To promote research in this area, the Tongren dataset and source code will be made publicly available at https://github.com/huawang123/VC-Net.
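A minimal sketch of the vessel-constraint idea described above, assuming a PyTorch implementation: a vessel probability map is converted into a spatial weight map that re-weights the A/V features, emphasising responses at vessel edges and ends while suppressing background-prone responses. The module and parameter names are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class VesselConstraint(nn.Module):
    """Hypothetical vessel-constraint (VC) style module: vessel map -> feature weights."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution maps the single-channel vessel map to per-channel weights
        self.to_weight = nn.Conv2d(1, channels, kernel_size=1)

    def forward(self, av_feat: torch.Tensor, vessel_prob: torch.Tensor) -> torch.Tensor:
        # av_feat:     (N, C, H, W) artery/vein feature map
        # vessel_prob: (N, 1, H, W) vessel segmentation probability
        weight = torch.sigmoid(self.to_weight(vessel_prob))   # spatial weights in (0, 1)
        # re-weight the A/V features; the residual term keeps gradient flow
        # for pixels the vessel map misses (e.g. faint vessel ends)
        return av_feat + av_feat * weight


# Example usage on dummy tensors
if __name__ == "__main__":
    feat = torch.randn(2, 64, 128, 128)     # A/V branch features
    vessel = torch.rand(2, 1, 128, 128)     # vessel probability map
    out = VesselConstraint(64)(feat, vessel)
    print(out.shape)                        # torch.Size([2, 64, 128, 128])
```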
Deep learning methods have been successfully applied to medical image classification, segmentation, and detection tasks, and the U-Net architecture has been widely used for them. In this paper, we propose a U-Net variant for improved vessel segmentation in retinal fundus images. First, we design a minimal U-Net (Mi-UNet) architecture that drastically reduces the parameter count to 0.07M, compared with 31.03M for the conventional U-Net. Building on Mi-UNet, we then propose Salient U-Net (S-UNet), a bridge-style U-Net architecture with a saliency mechanism and only 0.21M parameters. S-UNet uses a cascading technique that employs the foreground features of one net block as the foreground attention information of the next; this cascading enhances the input images, inherits the learning experience of previous net blocks, and thus effectively addresses the data imbalance problem. S-UNet was tested on two benchmark datasets, DRIVE and CHASE_DB1, with image sizes of 584 × 565 and 960 × 999, respectively, as well as on the TONGREN clinical dataset with image sizes of 1880 × 2816. The experimental results show superior performance compared with other state-of-the-art methods. In particular, for whole-image input from the DRIVE dataset, S-UNet achieved a Matthews correlation coefficient (MCC), an area under the curve (AUC), and an F1 score of 0.8055, 0.9821, and 0.8303, respectively; the corresponding scores for the CHASE_DB1 dataset were 0.8065, 0.9867, and 0.8242. Moreover, our model also performs well on the TONGREN clinical dataset. In addition, S-UNet segments images of low, medium, and high resolution in just 33 ms, 91 ms, and 0.49 s, respectively, demonstrating the real-time applicability of the proposed model.
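A minimal sketch of the cascading saliency mechanism described above, again assuming PyTorch: the foreground (vessel) probability predicted by one small net block is used to enhance the image fed to the next block. The block internals and names are hypothetical stand-ins, not the authors' Mi-UNet/S-UNet implementation.

```python
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """Stand-in for one minimal U-Net block that outputs a vessel probability map."""
    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(8, 1, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.body(x))


class CascadedSaliency(nn.Module):
    """Bridge-style cascade: each block's foreground map re-weights the next block's input."""
    def __init__(self, num_blocks: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(TinyBlock() for _ in range(num_blocks))

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = image
        prob = None
        for block in self.blocks:
            prob = block(x)
            # foreground attention: emphasise pixels the previous block
            # already believes are vessels before the next block sees them
            x = image * (1.0 + prob)
        return prob


# Example usage on a dummy grayscale fundus patch
if __name__ == "__main__":
    patch = torch.rand(1, 1, 64, 64)
    vessel_map = CascadedSaliency()(patch)
    print(vessel_map.shape)                 # torch.Size([1, 1, 64, 64])
```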