2023 · Preprint
DOI: 10.48550/arxiv.2301.04882
ZScribbleSeg: Zen and the Art of Scribble Supervised Medical Image Segmentation

Abstract: Curating a large-scale fully-annotated dataset can be both labour-intensive and expertise-demanding, especially for medical images. To alleviate this problem, we propose to utilize solely scribble annotations for weakly supervised segmentation. Existing solutions mainly leverage selective losses computed solely on annotated areas and generate pseudo gold standard segmentation by propagating labels to adjacent areas. However, these methods could suffer from the inaccurate and sometimes unrealistic pseudo segment…
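The abstract mentions selective losses computed solely on annotated areas. Below is a minimal sketch of one such selective (partial) cross-entropy loss in PyTorch, assuming unannotated pixels are marked with an ignore index; this is a common scribble-supervision baseline, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, scribbles, ignore_index=255):
    """Cross-entropy evaluated only on scribble-annotated pixels.

    logits:    (B, C, H, W) raw network outputs.
    scribbles: (B, H, W) integer class labels; unannotated pixels carry
               `ignore_index` and contribute nothing to the loss.
    """
    # F.cross_entropy skips pixels whose target equals ignore_index,
    # so the average runs over annotated (scribbled) pixels only.
    return F.cross_entropy(logits, scribbles, ignore_index=ignore_index)

# Hypothetical usage: 4-class predictions with a small annotated patch.
logits = torch.randn(2, 4, 64, 64)
scribbles = torch.full((2, 64, 64), 255, dtype=torch.long)
scribbles[:, 30:34, 30:34] = 1
loss = partial_cross_entropy(logits, scribbles)
```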

Cited by 2 publications (2 citation statements) · References 43 publications
“…In the general computer vision literature, several recent articles propose the use of these types of annotations, especially in the field of medical image segmentation. Scribble annotations are used in [48]-[51], points in [52]-[54], and bounding boxes in [55]-[58]. In Marine Science applications the literature on weakly supervised methods is scarce.…”
Section: Related Work
mentioning confidence: 99%
“…Following [29], we utilize a Gaussian kernel function to design the low-level weight function ω_low, which is defined by the distinction between two pixels in terms of image intensity value v and spatial location l…”
Section: Affinity Loss
mentioning confidence: 99%
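The quoted passage defines the low-level weight ω_low with a Gaussian kernel over pixel intensity v and spatial location l. Below is a minimal sketch assuming the common bilateral form ω_low(i, j) = exp(-‖v_i - v_j‖²/2σ_v² - ‖l_i - l_j‖²/2σ_l²); the function name, σ values, and tensor layout are illustrative assumptions, not the citing paper's exact definition.

```python
import torch

def low_level_affinity(intensity, coords, sigma_v=0.1, sigma_l=6.0):
    """Gaussian-kernel affinity between pixels from intensity and location.

    intensity: (N, C) image intensity values v for N pixels.
    coords:    (N, 2) spatial locations l (row, col) of the same pixels.
    Returns an (N, N) matrix of weights omega_low(i, j).
    """
    # Pairwise squared differences in intensity and in spatial position.
    dv = torch.cdist(intensity, intensity) ** 2            # ||v_i - v_j||^2
    dl = torch.cdist(coords.float(), coords.float()) ** 2  # ||l_i - l_j||^2
    # Gaussian kernel: nearby, similar-intensity pixels get weight close to 1.
    return torch.exp(-dv / (2 * sigma_v ** 2) - dl / (2 * sigma_l ** 2))
```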