2020
DOI: 10.1148/ryai.2020190211
Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge

Abstract: Intracranial hemorrhage is a potentially life-threatening problem that has many direct and indirect causes. Accuracy in diagnosing the presence and type of intracranial hemorrhage is a critical part of effective treatment. Diagnosis is often an urgent procedure requiring review of medical images by highly trained specialists and sometimes necessitating confirmation through clinical history, vital signs, and laboratory examinations. The process is complicated and requires immediate identification for optimal t…


citations
Cited by 134 publications
(93 citation statements)
references
References 8 publications
0
90
0
3
Order By: Relevance
“…The Radiological Society of North America (RSNA) 2019 Brain CT Hemorrhage dataset [28] was built from scratch for the 2019 RSNA Intracranial Hemorrhage Detection challenge held on Kaggle. The dataset comprises CT scans from three institutions: Stanford University, Universidade Federal de São Paulo, and Thomas Jefferson University Hospital.…”
Section: Experiments and Results (mentioning)
Confidence: 99%
“…Considering that there are 22 mistakes in common, we have strong reasons to suspect that the ground-truth labels might actually be wrong. As the ground-truth labels are also given by specialists [28], this is a likely explanation for the high overlap between the mistakes of doctor #2 and those of our CNN model. After seeing the ground-truth labels and making a careful reassessment, our team of doctors found at least 25 ground-truth labels that are wrong and another 5 that are disputable.…”
Section: Assessment by Radiologists and Discussion (mentioning)
Confidence: 99%