Medical Imaging 2020: Imaging Informatics for Healthcare, Research, and Applications (2020)
DOI: 10.1117/12.2551279
WeLineation: crowdsourcing delineations for reliable ground truth estimation

Cited by 6 publications (6 citation statements); references 0 publications.
“…Experts could then revise these initial annotations. Note that the utility of crowdsourcing in medical image annotation has been demonstrated in multiple studies (Foncubierta-Rodriguez and Muller, 2012; Gurari et al., 2015; Sharma et al., 2017; Goel et al., 2020).…”
Section: Discussion and Future Research
confidence: 97%
“…Our investigation is based on a dataset that was created using the "WeLineation" crowdsourcing platform for segmentation. 8 Within this study, mostly inexperienced users were performing the segmentations, only instructed by a short written description and one guidance figure (Fig 1). The dataset is composed of 75 photographs of human eyes (Fig.…”
Section: Methods
confidence: 99%
“…In previous work, we have established a web-based platform that allows numerous users (the crowd) to provide reference delineation of objects in medical images. 8 In contrast to previous STAPLE investigations, where references were mostly generated by just a few domain experts (one up to three), we examined the results using many ratings per image (30 and more) but generated from mostly inexperienced users.…”
Section: Introduction
confidence: 99%
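The Introduction statement above points to the technique at the core of the cited work: fusing many crowd delineations per image with STAPLE, an expectation-maximization estimator of the latent true segmentation and of each rater's reliability. The sketch below is a minimal, illustrative STAPLE-style EM for binary masks; it is not the WeLineation implementation, and the function name, default settings, and the synthetic usage example are assumptions made for illustration.

```python
# Minimal STAPLE-style EM for fusing many binary crowd masks (illustrative sketch;
# names and defaults are assumptions, not the WeLineation code).
import numpy as np

def staple_binary(masks, n_iter=50, tol=1e-6):
    """masks: array-like of shape (n_raters, H, W) with values in {0, 1}.
    Returns the consensus probability map plus per-rater sensitivity and specificity."""
    D = np.asarray(masks, dtype=float)
    W = D.mean(axis=0)                     # initial consensus: pixel-wise mean vote
    gamma = W.mean()                       # global prior probability of foreground
    eps = 1e-12
    for _ in range(n_iter):
        # M-step: rater j's sensitivity p[j] and specificity q[j] under the current consensus W.
        p = (W * D).sum(axis=(1, 2)) / max(W.sum(), eps)
        q = ((1 - W) * (1 - D)).sum(axis=(1, 2)) / max((1 - W).sum(), eps)
        # E-step: posterior probability that each pixel belongs to the object,
        # combining all raters' votes weighted by their estimated reliability.
        a = gamma * np.prod(np.where(D == 1, p[:, None, None], 1 - p[:, None, None]), axis=0)
        b = (1 - gamma) * np.prod(np.where(D == 0, q[:, None, None], 1 - q[:, None, None]), axis=0)
        W_new = a / np.clip(a + b, eps, None)
        if np.max(np.abs(W_new - W)) < tol:
            W = W_new
            break
        W = W_new
    return W, p, q

# Usage on synthetic data: 30 noisy "novice" masks of a disc, standing in for crowd delineations.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(int)
masks = np.stack([truth ^ (rng.random(truth.shape) < 0.05).astype(int) for _ in range(30)])
W, sens, spec = staple_binary(masks)
consensus = (W > 0.5).astype(int)          # estimated ground-truth segmentation
```

Thresholding W at 0.5 yields a hard consensus mask, and the estimated sensitivities and specificities correspond to the per-rater performance parameters that STAPLE reports alongside the fused segmentation, which is what makes it attractive when the raters are mostly inexperienced users.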
“…Crowd sourcing is a form of subjective consensus reference standard that has been applied to image annotation, image segmentation, and object delineation tasks. 70 It has been shown, in certain settings, that the quality of annotations from experts and those from novices becomes equivalent with an increased number of novices. 71,72 Nevertheless, the use of crowd sourcing as a reference standard for machine-learning applications in medical imaging must be further investigated before it can be recommended for general use.…”
Section: Crowd Sourcing
confidence: 99%