2019
DOI: 10.48550/arXiv.1902.09159
Preprint

A Survey of Crowdsourcing in Medical Image Analysis

Abstract: Rapid advances in image processing capabilities have been seen across many domains, fostered by the application of machine learning algorithms to "big-data". However, within the realm of medical image analysis, advances have been curtailed, in part, due to the limited availability of large-scale, well-annotated datasets. One of the main reasons for this is the high cost often associated with producing large amounts of high-quality meta-data. Recently, there has been growing interest in the application of crowd…


Citation types: 0 supporting, 7 mentioning, 0 contrasting
Years of citing publications: 2020–2024
Cited by 7 publications (7 citation statements)
References 36 publications
“…The aforementioned ImageNet dataset [29] has been annotated using a crowd-sourcing platform (Amazon Mechanical Turk). Although there may be few medical experts within the labelling group, crowd-sourcing has been shown to be effective in creating large quantities of annotated data, and it is faster and cheaper than annotation by medical experts [65]. In neuroscience research, crowdsourcing and gamification have helped neuroscientists to explore brain networks by identifying neurons and their synaptic connections [66,67].…”
Section: Obtaining Data Annotations (mentioning)
confidence: 99%
“…Here we investigate the possibility of training our model using multiple scribbles per training image. This scenario simulates crowdsourcing applications, which have been shown to be useful for annotating rare classes or for exploiting the varying levels of expertise among annotators [8], [47]. Here, we mimic the availability of scribble annotations collected by three different “sources”, using: i) expert-made scribbles; ii) scribbles approximated by skeletonization of the segmentation masks; iii) scribbles approximated by a random walk in the masks (see Section IV-A for a description of ii) and iii)).…”
Section: Combining Multiple Scribbles: Simulating Crowdsourcing (mentioning)
confidence: 99%
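
The skeletonization step mentioned in the excerpt above (approximating scribbles from full segmentation masks) can be illustrated with a short Python sketch. This is not the cited authors' code; the function name `mask_to_scribble` and the use of scikit-image are illustrative assumptions.

```python
# Illustrative sketch only: approximate a scribble annotation from a full
# binary segmentation mask by skeletonizing it (scikit-image assumed available).
import numpy as np
from skimage.morphology import skeletonize

def mask_to_scribble(mask):
    """Reduce a binary mask to a thin, scribble-like set of foreground pixels."""
    return skeletonize(mask.astype(bool))

# Example: a filled square collapses to a thin skeleton "scribble".
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
scribble = mask_to_scribble(mask)
print(int(scribble.sum()), "scribble pixels from", int(mask.sum()), "mask pixels")
```
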
“…Thus, pixels that are labeled by several annotators are counted multiple times, while pixels labeled by only one annotator are counted once. Other ways of combining annotations are also possible (e.g., taking the union of the scribbles, or weighting each annotator differently [47]), but they are out of the scope of this manuscript.…”
Section: Combining Multiple Scribbles: Simulating Crowdsourcing (mentioning)
confidence: 99%
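
The combination rule described in this excerpt amounts to summing the per-annotator scribble masks into a per-pixel weight map. A minimal sketch, with illustrative names only (not the cited paper's code):

```python
# Illustrative sketch: combine scribbles from several annotators into a
# per-pixel count, so pixels marked by more annotators carry more weight.
import numpy as np

def combine_scribbles(scribbles):
    """scribbles: list of boolean arrays of equal shape, one per annotator."""
    return np.sum(np.stack(scribbles, axis=0).astype(np.int32), axis=0)

# Three simulated annotators on a 4x4 image.
rng = np.random.default_rng(0)
scribbles = [rng.random((4, 4)) > 0.7 for _ in range(3)]
print(combine_scribbles(scribbles))  # 0 = unlabeled, 1-3 = number of annotators
```
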
“…True labels can be estimated from multiple imperfect reference labels with expectation maximization (EM) [10,11,12,13]. Creating multiple segmentation annotations is typically too expensive, but lower-quality results can be obtained with crowdsourcing [15]. Indeed, it has been demonstrated that the most efficient labeling strategy is to collect one high-quality label per example for many examples and then estimate the true labels with model-bootstrapped EM [14].…”
Section: Introduction (mentioning)
confidence: 99%
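
The EM idea in this excerpt (estimating true labels from several imperfect annotators) can be sketched with a simple symmetric-accuracy model: alternate between a reliability-weighted vote over annotators and re-estimating each annotator's accuracy. This is a generic illustration, not the model-bootstrapped EM of the cited work; all names and initial values are assumptions.

```python
# Illustrative EM-style aggregation of binary labels from multiple annotators.
# Each annotator is modeled with a single (symmetric) accuracy; we alternate
# between inferring per-item label posteriors and updating those accuracies.
import numpy as np

def em_labels(votes, n_iter=20):
    """votes: (n_annotators, n_items) array of 0/1 labels."""
    acc = np.full(votes.shape[0], 0.8)                # initial annotator accuracies
    for _ in range(n_iter):
        w = np.log(acc / (1.0 - acc))                  # reliability weight per annotator
        score = w @ (2 * votes - 1)                    # log-odds that the true label is 1
        prob1 = 1.0 / (1.0 + np.exp(-score))           # E-step: posterior P(label = 1)
        # M-step: accuracy = expected agreement with the current posterior
        acc = (votes * prob1 + (1 - votes) * (1 - prob1)).mean(axis=1)
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
    return (prob1 > 0.5).astype(int)

# Three annotators, five items; the third annotator is noisier.
votes = np.array([[1, 0, 1, 1, 0],
                  [1, 0, 1, 1, 0],
                  [0, 1, 0, 1, 1]])
print(em_labels(votes))  # expected: [1 0 1 1 0]
```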