2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv56688.2023.00613
From Forks to Forceps: A New Framework for Instance Segmentation of Surgical Instruments

Cited by 13 publications (3 citation statements)
References 44 publications
“…Comparison methods. We have involved several classical and recent methods, including the vanilla UNet [13], TernausNet [10], MF-TAPNet [12], Islam et al [14], Wang et al [15], ST-MTL [16], S-MTL [17], AP-MTL [18], ISINet [11], TraSeTR [19], and S3Net [20] for surgical binary and instrument-wise segmentation. The ViT-H-based SAM [2] is employed in all our investigations.…”
Section: Methods (confidence: 99%)
“…ISINet introduces mask classification to instrument segmentation with Mask-RCNN (González, Bravo-Sánchez, and Arbelaez 2020; He et al 2017). Later, Baby et al (2023) improve its classification performance by designing a specialised classification module. In addition, TraSeTR integrates tracking cues with a track-to-segment transformer (Zhao, Jin, and Heng 2022) and MATIS incorporates temporal consistency with Mask2Former (Ayobi et al 2023; Cheng et al 2022).…”
Section: Related Work on Surgical Instrument Segmentation (confidence: 99%)
“…We use the EndoVis2018 (Allan et al 2020) dataset. For evaluation, we follow prior research and adopt three segmentation metrics: Challenge IoU (Allan et al 2019), IoU, and mean class IoU (mc IoU) (González, Bravo-Sánchez, and Arbelaez 2020; Baby et al 2023; Ayobi et al 2023). The efficiency of our method is evaluated in terms of training speed, training GPU usage, and inference speed.…”
Section: Datasets and Evaluation (confidence: 99%)
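The metrics named in the excerpt above can be illustrated concretely. The sketch below computes per-class IoU and mean class IoU (mc IoU) over integer label maps; it is a minimal illustration, not the EndoVis challenge's official scoring code (Challenge IoU as defined by the challenge organisers handles class presence differently), and the function names are illustrative.

```python
import numpy as np

def class_iou(pred, target, cls):
    """IoU for a single class between two integer label maps."""
    p = pred == cls
    t = target == cls
    union = np.logical_or(p, t).sum()
    if union == 0:
        return np.nan  # class absent from both maps: undefined, skip it
    return np.logical_and(p, t).sum() / union

def mean_class_iou(pred, target, num_classes):
    """mc IoU: average per-class IoU over classes present in either map.

    Background is assumed to be label 0 and is excluded.
    """
    ious = [class_iou(pred, target, c) for c in range(1, num_classes + 1)]
    ious = [i for i in ious if not np.isnan(i)]
    return float(np.mean(ious)) if ious else 0.0

# Toy example: two instrument classes on a 4x4 frame.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 2, 2],
                 [0, 0, 2, 2]])
target = np.array([[1, 1, 1, 0],
                   [1, 1, 0, 0],
                   [0, 0, 2, 0],
                   [0, 0, 2, 2]])

print(class_iou(pred, target, 1))        # 4/5 = 0.8
print(mean_class_iou(pred, target, 2))   # (0.8 + 0.75) / 2 = 0.775
```

Averaging per-class IoU rather than pooling all pixels is what makes mc IoU sensitive to rare instrument classes, which is why the cited works report it alongside plain IoU.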