Feature extraction of retinal blood vessels is one of the crucial tasks in the prediction of ophthalmologic diseases. Important features are extracted from image segmentation results, so efficient vessel segmentation methods can help doctors diagnose several relevant diseases as early as possible. Recently, U-Net has achieved good results in many medical image segmentation tasks, especially for images of blood vessels. However, due to limitations of the network structure, some small features can be lost during transmission through the network. As a result, there are still many research gaps in U-Net-based retinal vessel segmentation. In this paper, we propose an improved U-Net-based model to segment images of retinal vessels. The improvement focuses on U-Net from two aspects: designing a local feature enhancement module composed of dilated convolution and $$1\times 1$$ convolution to enhance the feature extraction of tiny vessels, and integrating an attention mechanism into the skip connections of the network to highlight vessel-related features carried from the down-sampling path to the up-sampling path. The performance of the proposed model was evaluated and compared with several published state-of-the-art approaches on the same public dataset, DRIVE; the proposed method achieved an accuracy of 0.9563, an F1-score of 0.823, a TPR of 0.7983, and a TNR of 0.9793. The AUC of the PRC is 0.9109 and the AUC of the ROC is 0.9794. The results demonstrate the potential for clinical applications.
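The accuracy, TPR, TNR, and F1 figures reported above are standard pixel-wise metrics computed from the confusion counts between a predicted binary vessel mask and the ground truth. A minimal NumPy sketch (not from the paper; the function name and toy arrays are illustrative) of how such metrics are derived:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics for binary vessel segmentation.

    pred, truth: boolean arrays of the same shape
    (True = vessel pixel, False = background).
    """
    tp = np.sum(pred & truth)    # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)  # background correctly rejected
    fp = np.sum(pred & ~truth)   # background flagged as vessel
    fn = np.sum(~pred & truth)   # vessel pixels missed
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)         # sensitivity / recall
    tnr = tn / (tn + fp)         # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return accuracy, tpr, tnr, f1

# Toy example: a 4-pixel image, prediction vs. ground truth.
pred  = np.array([True, True, False, False])
truth = np.array([True, False, False, False])
acc, tpr, tnr, f1 = segmentation_metrics(pred, truth)
```

In a real evaluation the masks would come from thresholding the network's probability map against the DRIVE annotations; the AUC values additionally sweep this threshold.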
Specular highlight removal ensures the acquisition of high-quality images, with important applications in stereo matching, text recognition, and image segmentation. To prevent the leakage of images containing personal information, such as identification card (ID) photos, clients often train specular highlight removal models on local data, resulting in a lack of precision and generalization of the trained model. To address this challenge, this paper introduces a new method to remove highlights in images using federated learning (FL) and an attention generative adversarial network (AttGAN). Specifically, the former builds a global model on the central server and updates it by aggregating the model parameters of clients; this process does not involve the transmission of image data, which enhances client privacy. The latter combines attention mechanisms with a generative adversarial network to improve the quality of highlight removal by focusing on key image regions, yielding more realistic and visually pleasing results. The proposed FL-AttGAN method is numerically evaluated on the SD1, SD2, and RD datasets. The results show that the proposed FL-AttGAN outperforms existing methods.
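The server-side aggregation step described above — building a global model from client parameters without ever seeing client images — is typically a sample-weighted parameter average (FedAvg-style). A minimal sketch, assuming parameters are exchanged as name-to-array dicts (the function name, keys, and weights below are illustrative, not from the paper):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client model parameters.

    client_params: list of dicts mapping layer name -> np.ndarray
    client_sizes:  number of local training samples per client,
                   used to weight each client's contribution.
    """
    total = sum(client_sizes)
    keys = client_params[0].keys()
    return {
        k: sum(n / total * p[k] for p, n in zip(client_params, client_sizes))
        for k in keys
    }

# Two clients sharing a single 2-element weight vector;
# client 2 has 3x the data, so it is weighted 3x.
c1 = {"w": np.array([1.0, 2.0])}
c2 = {"w": np.array([3.0, 4.0])}
global_model = fedavg([c1, c2], [1, 3])
```

Only the arrays in `client_params` cross the network; the raw ID-card images stay on each client, which is the privacy property the abstract relies on.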
Computer-assisted diagnostic technology has been widely used in clinical practice, particularly for medical image segmentation. Its purpose is to segment targets with specific clinical meaning in medical images and extract relevant features, providing a reliable basis for subsequent clinical diagnosis and research. However, because segmentation targets in different medical images have different shapes and complex structures, some imaging techniques produce similar characteristics, such as intensity, color, or texture, when imaging different organs and tissues. The localization and segmentation of targets in medical images therefore remains an urgent technical challenge. To this end, an improved full-scale skip connection network structure for the CT liver image segmentation task is proposed. This structure includes a biomimetic attention module between the shallow encoder and the deep decoder, and the feature fusion proportion coefficient between the two is learned to enhance the network's attention to the segmented target area. In addition, based on the traditional point sampling mechanism, an improved point sampling strategy is proposed for characterizing medical images to further enhance the edge segmentation of CT liver targets. Experimental results on the commonly used Combined (CT-MR) Healthy Abdominal Organ Segmentation (CHAOS) dataset show that the average Dice similarity coefficient (DSC) reaches 0.9467, the average intersection over union (IoU) reaches 0.9623, and the average F1 score reaches 0.9351. This indicates that the model can effectively learn image detail features and global structural features, leading to improved segmentation of liver images.
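The DSC and IoU figures quoted above are overlap measures between the predicted liver mask and the ground-truth annotation. A minimal NumPy sketch of both definitions (illustrative code, not the paper's implementation):

```python
import numpy as np

def dice_iou(pred, truth):
    """Dice similarity coefficient and intersection-over-union
    for binary masks (True = liver pixel)."""
    inter = np.sum(pred & truth)
    dice = 2 * inter / (np.sum(pred) + np.sum(truth))
    iou = inter / np.sum(pred | truth)
    return dice, iou

# Toy 2x2 masks: one pixel overlaps out of three labeled in total.
pred  = np.array([[True, True], [False, False]])
truth = np.array([[True, False], [True, False]])
dice, iou = dice_iou(pred, truth)
```

Dice weights the overlap against the average mask size, while IoU weights it against the union, so Dice is always at least as large as IoU for the same pair of masks.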