2022
DOI: 10.1002/mp.15546

Curv‐Net: Curvilinear structure segmentation network based on selective kernel and multi‐Bi‐ConvLSTM

Abstract: Purpose: Accurately segmenting curvilinear structures, for example, retinal blood vessels or nerve fibers, in medical images is essential to the clinical diagnosis of many diseases. Recently, deep learning has become a popular technique for image segmentation and has achieved remarkable results. However, existing methods still have many problems when segmenting curvilinear structures in medical images, such as losing the details of curvilinear structures, producing many fal…

Cited by 10 publications (6 citation statements)
References 32 publications
“…For example, Ronneberger et al. [41] propose U-Net, which has been widely used in numerous medical image segmentation tasks. Existing curvilinear structure segmentation works focus on well-designed network architectures, introducing multi-scale [11], [42], multi-task [6], [9], [43], or various attention mechanisms [10], [44], and on preserving morphological and topological properties by introducing GANs or morphology-/topology-preserving loss functions [13], [14]. Still, data availability and annotation quality are the main limitations of these methods.…”
Section: A. Curvilinear Structure Segmentation
confidence: 99%
“…For DRIVE, we utilize the original division of 20 training samples and 20 testing samples. For CHASEDB1, we follow the division in [2], [11], with the first 20 images serving as the training set and the remaining 8 used for testing. For OCTA500, we utilize 200, 10, and 90 samples as the training, validation, and testing sets, respectively.…”
Section: A. Datasets and Preprocessing
confidence: 99%
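The splits quoted above are plain index-based divisions, so they can be reproduced in a few lines. Below is a minimal Python sketch; the directory layout, file extension, and the split_dataset helper are illustrative assumptions, not details taken from the cited papers.

    from pathlib import Path

    def split_dataset(name: str, image_dir: str):
        # Index-based train/val/test splits matching the divisions quoted
        # above. File layout and extension are hypothetical; adapt them to
        # your local copies of DRIVE, CHASEDB1, and OCTA500.
        files = sorted(Path(image_dir).glob("*.png"))
        if name == "DRIVE":      # original division: 20 train / 20 test
            return files[:20], [], files[20:40]
        if name == "CHASEDB1":   # first 20 train, remaining 8 test
            return files[:20], [], files[20:28]
        if name == "OCTA500":    # 200 train / 10 val / 90 test
            return files[:200], files[200:210], files[210:300]
        raise ValueError(f"unknown dataset: {name}")

    train, val, test = split_dataset("CHASEDB1", "data/CHASEDB1/images")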
“…MDCCNet [23] designs a multiscale deep context convolutional network that combines multiscale features and restores object boundaries through a densely connected CRF. Curv-Net [24] proposes a new U-shaped network composed of an SK module and multi-Bi-ConvLSTM. The SK module is used to extract multiscale features, and the multi-Bi-ConvLSTM is used to fuse feature information from deep and shallow stages.…”
Section: Semantic Segmentation
confidence: 99%
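For context on the quoted description: the SK (selective kernel) idea comes from Li et al.'s Selective Kernel Networks, in which parallel branches with different receptive fields are fused by channel-wise soft attention, letting the network choose its effective kernel size per channel. Below is a minimal, generic PyTorch sketch of an SK-style block; the branch count, dilation rates, and reduction ratio are illustrative assumptions, and this is not the Curv-Net authors' implementation.

    import torch
    import torch.nn as nn

    class SKBlock(nn.Module):
        # Selective-kernel style block: parallel multiscale branches fused
        # by softmax attention over branches, computed from a global
        # channel descriptor.
        def __init__(self, channels: int, branches: int = 2, reduction: int = 8):
            super().__init__()
            self.paths = nn.ModuleList(
                nn.Sequential(
                    # 3x3 kernels with growing dilation emulate larger kernels
                    nn.Conv2d(channels, channels, 3, padding=1 + i,
                              dilation=1 + i, bias=False),
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                )
                for i in range(branches)
            )
            hidden = max(channels // reduction, 8)
            self.squeeze = nn.Sequential(nn.Linear(channels, hidden),
                                         nn.ReLU(inplace=True))
            self.select = nn.Linear(hidden, channels * branches)

        def forward(self, x):
            feats = torch.stack([p(x) for p in self.paths], dim=1)  # (B, K, C, H, W)
            s = feats.sum(dim=1).mean(dim=(2, 3))                   # global descriptor (B, C)
            a = self.select(self.squeeze(s))                        # branch logits (B, K*C)
            a = a.view(x.size(0), len(self.paths), -1)              # (B, K, C)
            a = torch.softmax(a, dim=1)[..., None, None]            # choose among branches
            return (feats * a).sum(dim=1)                           # (B, C, H, W)

    y = SKBlock(64)(torch.randn(2, 64, 48, 48))  # shape-preserving: (2, 64, 48, 48)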
“…In recent years, benefiting from the development of deep learning (DL), many DL-based segmentation algorithms for curvilinear structures have been proposed and have shown overwhelming performance compared to traditional (e.g., matched filter-based and morphological processing-based (Nguyen et al., 2013; Singh and Srivastava, 2016)) methods. Most existing works are dedicated to designing sophisticated network architectures (Peng et al., 2021; Mou et al., 2021; He et al., 2022) and deploying strategies to preserve curvilinear structures' topology by employing generative adversarial networks (GANs) (Lin et al., 2021c; Son et al., 2019) or topology-preserving loss functions (Cheng et al., 2021a; Shit et al., 2021). These methods are typically fully supervised, wherein large-scale well-annotated datasets are required.…”
Section: Introduction
confidence: 99%
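One concrete instance of the topology-preserving losses cited above (Shit et al., 2021) is clDice, which compares soft skeletons of the prediction and the ground truth rather than the masks alone. The sketch below is a condensed reading of the published recipe for 2D inputs in [0, 1], not the reference implementation; the iteration count is an assumption tied to the expected vessel width.

    import torch
    import torch.nn.functional as F

    def _soft_open(img):
        # soft erosion then soft dilation via 3x3 min/max pooling
        return F.max_pool2d(-F.max_pool2d(-img, 3, 1, 1), 3, 1, 1)

    def soft_skeleton(img, iters: int = 10):
        # iterative soft skeletonization: peel the mask and keep the ridges
        skel = F.relu(img - _soft_open(img))
        for _ in range(iters):
            img = -F.max_pool2d(-img, 3, 1, 1)          # soft erode
            delta = F.relu(img - _soft_open(img))
            skel = skel + F.relu(delta * (1.0 - skel))  # add new ridge pixels
        return skel

    def soft_cldice_loss(pred, target, eps: float = 1e-6):
        # centerline Dice: overlap of each skeleton with the other mask
        sp, st = soft_skeleton(pred), soft_skeleton(target)
        tprec = ((sp * target).sum() + eps) / (sp.sum() + eps)
        tsens = ((st * pred).sum() + eps) / (st.sum() + eps)
        return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)

In practice this term is typically blended with a region loss, e.g. (1 - alpha) * soft_dice + alpha * soft_cldice, rather than used alone.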