2021
DOI: 10.1155/2021/5520407

Automatic Retinal Vessel Segmentation Based on an Improved U-Net Approach

Abstract: Retinal blood vessels are the only deep microvessels in the blood circulation system that can be observed directly and noninvasively, providing us with a means of observing vascular pathologies. Cardiovascular and cerebrovascular diseases, such as glaucoma and diabetes, can cause structural changes in the retinal microvascular network. Therefore, the study of effective retinal vessel segmentation methods is of great significance for the early diagnosis of cardiovascular diseases and the vascular network’s quan…

Cited by 8 publications (9 citation statements) · References 49 publications
“…Huang et al. realized a supervised learning method using an improved U-Net network with 23 convolutional layers, achieving accuracies of 0.9701, 0.9683, and 0.9698 on the DRIVE, STARE, and HRF datasets, respectively. However, its area under the curve (AUC) was only 0.8895, 0.8845, and 0.8686 [20]. Liang et al. fused the linear features, texture features, and other features of retinal vessels to train a random forest classifier that realizes automatic segmentation of retinal vessels [9].…”
Section: Introduction (mentioning)
Confidence: 99%
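As a rough illustration of the feature-fusion-plus-classifier strategy attributed to Liang et al. above, the sketch below trains a pixel-wise random forest on a few hand-crafted filter responses. The specific filters, parameters, and helper names are illustrative assumptions, not the authors' published pipeline.

```python
# Hypothetical sketch of feature fusion + random-forest vessel segmentation,
# in the spirit of Liang et al. [9]; feature choices and parameters are
# illustrative assumptions, not the exact published method.
import numpy as np
from skimage import filters
from sklearn.ensemble import RandomForestClassifier

def fuse_pixel_features(green_channel):
    """Stack simple linear and texture responses per pixel (illustrative)."""
    img = green_channel.astype(np.float32)
    feats = [
        img,                               # raw intensity
        filters.frangi(img),               # vesselness (tubular/linear response)
        filters.sobel(img),                # gradient magnitude (edge/texture)
        filters.gaussian(img, sigma=2.0),  # smoothed local context
    ]
    return np.stack(feats, axis=-1).reshape(-1, len(feats))

def train_rf(images, vessel_masks):
    """Train a pixel-wise random forest on the fused features."""
    X = np.concatenate([fuse_pixel_features(im) for im in images])
    y = np.concatenate([m.reshape(-1) for m in vessel_masks])
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X, y)
    return clf

def segment(clf, image):
    """Predict a binary vessel map for one fundus image (green channel)."""
    proba = clf.predict_proba(fuse_pixel_features(image))[:, 1]
    return (proba.reshape(image.shape) > 0.5).astype(np.uint8)
```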
“…This method shares the U-shaped architecture of DUNet [25] and Sine-Net [27], the latter of which takes an up-sampling-followed-by-down-sampling approach, and it largely compensates for the shortcomings of partial capillary segmentation in the former method. In addition, the HAnet [26] and Huang et al. [3] methods are also improved U-shaped networks, with the difference that HAnet is designed with multiple decoders focusing on features in different regions. Although HAnet's segmentation is not as accurate as that of Huang et al.'s method, the vascular continuity of its segmentations is stronger.…”
Section: Retinal Segmentation Results of Different Methods on the DRIVE Dataset (mentioning)
Confidence: 99%
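For readers unfamiliar with the U-shaped family discussed above (DUNet, Sine-Net, HAnet, and the improved U-Net of Huang et al.), the following is a minimal PyTorch sketch of the shared encoder-decoder pattern with skip connections. The depth and channel widths are assumptions chosen for brevity, not any specific paper's configuration.

```python
# Minimal U-shaped encoder-decoder sketch (PyTorch); an illustration of the
# general pattern only, with assumed layer widths and depth.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, out_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.bottleneck = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)       # 64 (up-sampled) + 64 (skip)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)        # 32 (up-sampled) + 32 (skip)
        self.head = nn.Conv2d(32, out_ch, 1)  # per-pixel vessel logit

    def forward(self, x):
        e1 = self.enc1(x)                     # full resolution
        e2 = self.enc2(self.pool(e1))         # 1/2 resolution
        b = self.bottleneck(self.pool(e2))    # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # logits; apply sigmoid for probabilities

# Example: a 1-channel 64x64 fundus patch
# out = TinyUNet()(torch.randn(1, 1, 64, 64))   # -> shape (1, 1, 64, 64)
```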
“…The retinal vasculature is again the only deep microvasculature in the blood circulation system which can be directly and noninvasively visualized. It is extremely rich in information about its vascular characteristics [3]. More importantly, the morphological information related to the retinal vascular tree (e.g.…”
Section: Introduction (mentioning)
Confidence: 99%
“…To begin with, the encoder is the most adapted and most changeable component of the UNet architecture. Since it is practically not possible to study each of the architectural variations in the encoder, we have listed here the 23 variations (E1 to E23, representing encoder changes) along with their references in a tabular format, as follows: (E1) conventional system (Ronneberger) [43-52,90]; (E2) cascade of convolutions [77,91,99,116,117]; (E3) parallel convolutions (multiple convolution network) [57]; (E4) convolution with dropout [70,76,86,95,101,102,134,138]; (E5) residual network [76,78,105,129,135,138,149-151]; (E6) Xception encoder [56,88,112]; (E7) encoder layers with independent inputs [104,140]; (E8) squeeze-excitation (SE) network [92,103,138]; (E9) pooling types (max pooling, global average pooling) [95]; (E10) input image dimension change with changing filter (channe...…”
Section: A. Encoder Variations (mentioning)
Confidence: 99%
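To make the encoder-variation taxonomy quoted above concrete, here is a small PyTorch sketch combining two of the listed options: a residual block (E5) gated by a squeeze-and-excitation module (E8). Channel sizes and the reduction ratio are illustrative assumptions, not a reproduction of any cited encoder.

```python
# Illustrative encoder block combining residual connections (E5) with
# squeeze-and-excitation channel gating (E8); generic example only.
import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    """Channel re-weighting from globally pooled statistics (SE network)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool -> (N, C)
        return x * w[:, :, None, None]    # excite: per-channel scaling

class ResidualSEEncoderBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        self.se = SqueezeExcite(out_ch)
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return torch.relu(self.se(self.body(x)) + self.skip(x))  # residual sum

# blk = ResidualSEEncoderBlock(32, 64)
# blk(torch.randn(1, 32, 48, 48)).shape   # -> (1, 64, 48, 48)
```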
“…Note that the decoder receives input in many different ways, such as from the encoder, from skip connections, or as data transmitted via the bridge network (or bottleneck). Using these fundamental changes, the decoder variations can be categorized into 16 different types, listed as follows: (D1) convolution with dropout [70,76,86,95,101,102,134,138]; (D2) UNet++ type of change [130,144,154]; (D3) UNet+++ (UNet 3+) full-scale deep supervision [157]; (D4) output from decoders to make a loss function [104,140]; (D5) fusion of the decoder outputs for scale adjustment [59,107]; (D6) recurrent residual [118,129,138]; (D7) residual block [75,84,88,105,138,150]; (D8) channel attention and scale attention block [65,113]; (D9) transpose convolution [66,88,94,95,139]; (D10) squeeze-excitation (SE) network [103,125]; (D11) cascade convolution [99]; (D12) addition of the original image to each layer [100]; (D13) batch normalization [95,106,155]; (D14) inception block [97]; (D15) dense layer [87,91,…”
Section: B. Decoder Variations (mentioning)
Confidence: 99%
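Similarly, the decoder-variation list can be illustrated with a small sketch: the block below up-samples with a transpose convolution (D9), fuses an encoder skip connection, and applies batch normalization (D13) and dropout (D1). It is a generic example under assumed channel sizes, not a reproduction of any cited decoder.

```python
# Illustrative decoder block: transpose-convolution up-sampling (D9) with
# skip-connection fusion, batch normalization (D13) and dropout (D1).
import torch
import torch.nn as nn

class UpDecoderBlock(nn.Module):
    def __init__(self, in_ch, skip_ch, out_ch, p_drop=0.1):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)  # D9
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),            # D13
            nn.ReLU(inplace=True),
            nn.Dropout2d(p_drop),              # D1
        )

    def forward(self, x, skip):
        x = self.up(x)                   # double the spatial resolution
        x = torch.cat([x, skip], dim=1)  # fuse encoder features at that scale
        return self.refine(x)

# blk = UpDecoderBlock(in_ch=128, skip_ch=64, out_ch=64)
# blk(torch.randn(1, 128, 16, 16), torch.randn(1, 64, 32, 32)).shape  # -> (1, 64, 32, 32)
```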