2017
DOI: 10.1117/12.2252281

Crowdsourcing for identification of polyp-free segments in virtual colonoscopy videos

Abstract: Virtual colonoscopy (VC) allows a physician to virtually navigate within a reconstructed 3D colon model searching for colorectal polyps. Though VC is widely recognized as a highly sensitive and specific test for identifying polyps, one limitation is the reading time, which can take over 30 minutes per patient. Large amounts of the colon are often devoid of polyps, and a way of identifying these polyp-free segments could be valuable in reducing the required reading time for the interrogating radiologist.…


Cited by 12 publications (27 citation statements)
References 11 publications
“…Recent studies have shown that crowdsourcing can produce comparable results to human experts at a faster pace [ 74 - 78 ]. Thus, crowdsourcing was used to assess the evaluation data set.…”
Section: Methods (mentioning)
confidence: 99%
“…Classify (Albarqouni et al, 2016a), (Brady et al, 2014), (Brady et al, 2017), (dos Reis et al, 2015), (Eickhoff, 2014), (Foncubierta Rodríguez and Müller, 2012), (Gur et al, 2017), (de Herrera et al, 2014), (Holst et al, 2015), (Huang and Hamarneh, 2017), (Keshavan et al, 2018), (Lawson et al, 2017), (Malpani et al, 2015), (Mavandadi et al, 2012), (Mitry et al, 2013), (Mitry et al, 2015), (Nguyen et al, 2012), (Park et al, 2016), (Park et al, 2017), (Smittenaar et al, 2018), (Sonabend et al, 2017), (Sullivan et al, 2018) Segment (Roethlingshoefer et al, 2017), (Boorboor et al, 2018), (Bruggemann et al, 2018), (Cabrera-Bean et al, 2017), (Chávez-Aragón et al, 2013), (Cheplygina et al, 2016), (Ganz et al, 2017), (Gurari et al, 2015b), (Heller et al, 2017), (Irshad et al, 2015), (Lee and Tufail, 2014), (Lee et al, 2016), (Lejeune et al, 2017), (Luengo-Oroz et al, 2012), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2016), (O'Neil et al, 2017), (Park et al, 2018),…”
Section: Task Papers (mentioning)
confidence: 99%
“…Abdomen (Roethlingshoefer et al, 2017), (Heim, 2018), (Heller et al, 2017), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2014b), (Maier-Hein et al, 2015), (Maier-Hein et al, 2016), (McKenna et al, 2012), (Nguyen et al, 2012), (Park et al, 2016), (Park et al, 2017), (Park et al, 2018), (Rajchl et al, 2017) Brain (Ganz et al, 2017), (Keshavan et al, 2018), (Rajchl et al, 2016), (Sonabend et al, 2017), (Timmermans et al, 2016) Eye (Brady et al, 2014), (Brady et al, 2017), (Lee and Tufail, 2014), (Lee et al, 2016), (Leifman et al, 2015), (Mitry et al, 2013), (Mitry et al, 2015), (Mitry et al, 2016) Heart (Gur et al, 2017) Histo (Albarqouni et al, 2016a), (Albarqouni et al, 2016b), (Bruggemann et al, 2018), (Cabrera-Bean et al, 2017), (Della Mea et al, 2014), (dos Reis et al, 2015), (Eickhoff, 2014), (Irshad et al, 2015), (Irshad et al, 2017), (Lawson et al, 2017), (Luengo-Oroz et al, 2012), (Mavandadi et al, 2012), (Sameki et al, 2016), (Sharma et al, 2017),…”
Section: Domain Papers (mentioning)
confidence: 99%
“…For some tasks, such as interpreting X-ray radiographs, large amounts of training data are already generated and archived under normal protocols, and these data can be used as is without need for additional annotations (Gale et al, 2017). When untrained workers perform moderately well, but not quite on par with experts, their annotations can be used to train a "first pass" model that identifies regions of interest (Park et al, 2017), or one that performs only those tasks that non-experts can do well (Heim et al, 2018). Researchers might have access to a community of knowledgeable, enthusiastic amateurs, such as those who enjoy identification of birds (Van Horn et al, 2015) or aircraft (Maji et al, 2013).…”
Section: Introduction (mentioning)
confidence: 99%