2022
DOI: 10.1007/978-3-031-13841-6_6

Automated Vein Segmentation from NIR Images Using a Mixer-UNet Model

Cited by 6 publications (3 citation statements)
References 10 publications
“…The U-Net architecture comprises two paths for segmenting biomedical images. The first path, known as the encoder, contracts the input: context is captured by the encoder using a small feature map [35][36][37]. The encoder is built from convolution and max-pooling layers, similar to VGG-16.…”
Section: Segmentation
confidence: 99%
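To make the contracting-path idea concrete, here is a minimal sketch (an illustration, not the cited paper's code) of how a VGG-16-style U-Net encoder shrinks the spatial feature map while deepening the channels; the stage count and channel widths are assumptions following common U-Net configurations.

```python
def encoder_shapes(height, width, channels_in=1, base_channels=64, stages=4):
    """Trace the (H, W, C) feature-map shape through each stage of a
    U-Net-style contracting path (encoder)."""
    shapes = [(height, width, channels_in)]
    h, w, c = height, width, base_channels
    for _ in range(stages):
        # Two same-padded 3x3 convolutions: spatial size unchanged,
        # channel count set to the stage width c.
        shapes.append((h, w, c))
        # 2x2 max-pooling with stride 2: spatial size halved.
        h, w = h // 2, w // 2
        shapes.append((h, w, c))
        # Channel width typically doubles at the next, deeper stage.
        c *= 2
    return shapes

# Example: a 256x256 single-channel NIR image contracts stage by stage
# down to a small context-rich feature map.
for shape in encoder_shapes(256, 256):
    print(shape)
```

The final shape, (16, 16, 512), is the "small feature map" the quoted statement refers to: spatial resolution is traded away so that each remaining position summarizes a large receptive field of context.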
“…The prototype was designed based on binocular near-infrared imaging and pressure-sensing technologies. In 2021, Qi et al. from Tongji University developed VeniBot, a compact venipuncture robot [16, 17] that combines binocular near-infrared and ultrasound for vascular imaging. They also proposed a novel deep-learning algorithm for automatic navigation of the puncture device.…”
Section: Introduction
confidence: 99%
“…Current research in the field of intelligent injection robotics focuses on vein imaging [10, 11], vein detection and segmentation [12, 13], selection of the needle-insertion point [14, 15], and robot system design [16, 17]. By contrast, few studies address the needle-insertion angle for dorsal-hand intravenous injection robots.…”
Section: Introduction
confidence: 99%