2015
DOI: 10.1590/0004-282x20150080
Indices of agreement between neurosurgeons and a radiologist in interpreting tomography scans in an emergency department

Abstract: Agreement in the interpretation of cranial computed tomography (CCT) between neurosurgeons and radiologists has rarely been studied. This study aimed to assess the rate of agreement in the interpretation of CCTs between neurosurgeons and a radiologist in an emergency department. Method: 227 CCTs were independently analyzed by two neurosurgeons (NS1 and NS2) and a radiologist (RAD), and the level of agreement in interpreting the examinations was assessed. Results: The Kappa values obtained between NS1 and NS2 …
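The Kappa values the abstract refers to are Cohen's kappa, a chance-corrected measure of agreement between two raters. A minimal sketch of the computation, using hypothetical binary readings (finding present/absent) rather than the study's actual data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is agreement expected by chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical readings (1 = finding present, 0 = absent).
ns1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rad = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(ns1, rad), 2))  # → 0.62
```

Kappa of 1 indicates perfect agreement and 0 indicates agreement no better than chance; conventional interpretation bands (e.g. "good" above roughly 0.6) are what the cited comparisons rely on.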


Cited by 3 publications (1 citation statement)
References 10 publications
“…One previous study found good agreement between a radiologist and neurosurgeons in terms of the overall interpretation of computed tomography scans of the brain, although the level of agreement was found to be poor for assessments of leukoaraiosis and reduced brain volume (21). Subtle MRI findings tend to result in poor inter-rater reliability, although there are divergent results in the literature regarding the exact degree of inter-rater reliability for each finding or medical specialty.…”
Section: Discussion
confidence: 97%