2012
DOI: 10.1016/j.neuroimage.2011.07.085
Foibles, follies, and fusion: Web-based collaboration for medical image labeling

Abstract: Labels that identify specific anatomical and functional structures within medical images are essential to the characterization of the relationship between structure and function in many scientific and clinical studies. Automated methods that allow for high throughput have not yet been developed for all anatomical targets or validated for exceptional anatomies, and manual labeling remains the gold standard in many cases. However, manual placement of labels within a large image volume such as that obtained using…

Cited by 12 publications (14 citation statements) · References 22 publications
“…An alternative approach is to use automatic segmentations as atlases, after applying a quality control step. Yet a different strategy is to harness the potential of non-expert segmenters (Bogovic et al, 2013; Bryan et al, 2014), for example, via a crowd-sourcing framework (Landman et al, 2012a; Maier-Hein et al, 2014). Although many biomedical segmentation problems rely on anatomical expertise, it is not clear whether this expertise has to be deployed in the delineation of every single atlas.…”
Section: Discussion and Future Directions
confidence: 99%
“…The remaining 7 volumes were used as training data to compute initial estimates of the performance level parameters (see [20] for details). Both STAPLE and Spatial STAPLE used these initial parameter estimates with a bias value of κ = 1.…”
Section: Methods and Results
confidence: 99%
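The excerpt above concerns STAPLE's per-rater performance parameters (sensitivity and specificity) initialized from training data. As a rough illustration of the underlying idea only — not the cited authors' implementation — here is a minimal NumPy sketch of STAPLE-style EM fusion for binary labels; the function name, the fixed initial guesses of 0.9, and the mean-vote prior are assumptions of this sketch:

```python
import numpy as np

def staple_binary(decisions, n_iter=50, eps=1e-6):
    """Fuse binary segmentations from several raters (STAPLE-style EM sketch).

    decisions: (n_raters, n_voxels) array of 0/1 labels.
    Returns (W, p, q): per-voxel posterior probability of label 1,
    plus each rater's estimated sensitivity p and specificity q.
    """
    decisions = np.asarray(decisions, dtype=float)
    n_raters, _ = decisions.shape
    W = decisions.mean(axis=0)   # initialize consensus with the mean vote
    p = np.full(n_raters, 0.9)   # assumed initial sensitivity guesses
    q = np.full(n_raters, 0.9)   # assumed initial specificity guesses
    for _ in range(n_iter):
        prior = W.mean()
        # E-step: posterior probability that the true label is 1 at each voxel
        a = prior * np.prod(
            np.where(decisions == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(
            np.where(decisions == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance parameters
        p = np.clip((decisions * W).sum(axis=1) / W.sum(), eps, 1 - eps)
        q = np.clip(((1 - decisions) * (1 - W)).sum(axis=1)
                    / (1 - W).sum(), eps, 1 - eps)
    return W, p, q
```

Thresholding W at 0.5 yields the fused segmentation; raters who disagree with the emerging consensus are automatically down-weighted through their estimated p and q, which is what makes this family of estimators attractive for minimally trained raters.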
“…Figures 5 and 6) where the foibles and follies of minimally trained raters are taken into account. For example, the concept of crowd-sourcing the labeling problem using minimally trained raters has gained traction [20, 33]. Through simple manipulation of the seed points and window sizes, Spatial STAPLE provides a valuable resource that could, for instance, be used to explicitly model a rater's learning curve, degradation over time, and general human mistakes that are often present in the labeling process.…”
Section: Discussion
confidence: 99%
“…Crowd-sourced, collaborative labeling has the potential to exploit the “wisdom of the crowd” and avoid strict requirements for particularly “wise” individuals. The recently presented WebMILL framework promises to bring together individuals (i.e., “millers”) independent of physical location and use statistical fusion to combine their results [1]. This potentially transformative technology presents a new set of challenges: investigators must pose the labeling tasks in a manner accessible to people with little or no background in medical imaging and who cannot be expected to read detailed instructions.…”
Section: Introduction
confidence: 99%