Towards an Efficient Way of Building Annotated Medical Image Collections for Big Data Studies (2017)
DOI: 10.1007/978-3-319-67534-3_10

Cited by 9 publications (17 citation statements)
References 5 publications
“…Classify (Albarqouni et al, 2016a), (Brady et al, 2014), (Brady et al, 2017), (dos Reis et al, 2015), (Eickhoff, 2014), (Foncubierta Rodríguez and Müller, 2012), (Gur et al, 2017), (de Herrera et al, 2014), (Holst et al, 2015), (Huang and Hamarneh, 2017), (Keshavan et al, 2018), (Lawson et al, 2017), (Malpani et al, 2015), (Mavandadi et al, 2012), (Mitry et al, 2013), (Mitry et al, 2015), (Nguyen et al, 2012), (Park et al, 2016), (Park et al, 2017), (Smittenaar et al, 2018), (Sonabend et al, 2017), (Sullivan et al, 2018); Segment (Roethlingshoefer et al, 2017), (Boorboor et al, 2018), (Bruggemann et al, 2018), (Cabrera-Bean et al, 2017), (Chávez-Aragón et al, 2013), (Cheplygina et al, 2016), (Ganz et al, 2017), (Gurari et al, 2015b), (Heller et al, 2017), (Irshad et al, 2015), (Lee and Tufail, 2014), (Lee et al, 2016), (Lejeune et al, 2017), (Luengo-Oroz et al, 2012), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2016), (O'Neil et al, 2017), (Park et al, 2018),…”
Section: Task Papers (mentioning, confidence: 99%)
“…Abdomen (Roethlingshoefer et al, 2017), (Heim, 2018), (Heller et al, 2017), (Maier-Hein et al, 2014a), (Maier-Hein et al, 2014b), (Maier-Hein et al, 2015), (Maier-Hein et al, 2016), (McKenna et al, 2012), (Nguyen et al, 2012), (Park et al, 2016), (Park et al, 2017), (Park et al, 2018), (Rajchl et al, 2017); Brain (Ganz et al, 2017), (Keshavan et al, 2018), (Rajchl et al, 2016), (Sonabend et al, 2017), (Timmermans et al, 2016); Eye (Brady et al, 2014), (Brady et al, 2017), (Lee and Tufail, 2014), (Lee et al, 2016), (Leifman et al, 2015), (Mitry et al, 2013), (Mitry et al, 2015), (Mitry et al, 2016); Heart (Gur et al, 2017); Histo (Albarqouni et al, 2016a), (Albarqouni et al, 2016b), (Bruggemann et al, 2018), (Cabrera-Bean et al, 2017), (Della Mea et al, 2014), (dos Reis et al, 2015), (Eickhoff, 2014), (Irshad et al, 2015), (Irshad et al, 2017), (Lawson et al, 2017), (Luengo-Oroz et al, 2012), (Mavandadi et al, 2012), (Sameki et al, 2016), (Sharma et al, 2017),…”
Section: Domain Papers (mentioning, confidence: 99%)
“…However, it has been pointed out that there is a pressing need to develop methods for tagging images with semantic descriptors, e.g. for decision support or context awareness [17], [18]. For example, context-aware augmented reality (AR) in surgery is becoming a topic of interest.…”
Section: A. Related Work (mentioning, confidence: 99%)
“…To achieve generalization, deep learning models require large amounts of accurately annotated data. Obtaining such a dataset for a variety of medical images is challenging because expert annotation can be expensive, time-consuming [19,30], and often limited by subjective interpretation [24]. Moreover, other issues such as privacy and the under-representation of rare conditions impede the development of such datasets [54,44].…”
Section: Introduction (mentioning, confidence: 99%)