An assessment of the inter-rater reliability of the ASA physical status score in the orthopaedic trauma population
2015. DOI: 10.1016/j.injury.2014.02.039

Cited by 39 publications
References 22 publications
“…An interpretation of Fleiss’ kappa has not yet been established because the number of categories and subjects affects the magnitude of the value. Ihejirika and colleagues analyzed the inter‐rater reliability of ASA‐PS using nine scenarios, similar to the present study, and obtained moderate agreement (κ = 0.51). They concluded that substantial agreement strength for reliability was achieved.…”
Section: Discussion (supporting; confidence: 75%)
“…Ihejirika and colleagues analyzed the inter‐rater reliability of ASA‐PS using nine scenarios, similar to the present study, and obtained moderate agreement (κ = 0.51). They concluded that substantial agreement strength for reliability was achieved. The present study achieved a higher value for F‐κ (0.55) than that reported by Ihejirika and colleagues.…”
Section: Discussion (supporting; confidence: 75%)
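The Fleiss' kappa values discussed in these statements are computed from a subjects × categories count matrix, where each cell holds the number of raters assigning that subject to that category. A minimal illustrative sketch of the standard formula (the function name and example data are hypothetical, not taken from the cited studies):

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for counts[i][j] = number of raters assigning
    subject i to category j; every row must sum to the same number
    of raters n."""
    N = len(counts)          # number of subjects
    n = sum(counts[0])       # raters per subject
    k = len(counts[0])       # number of categories

    # Observed per-subject agreement P_i, then the mean P_bar
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in counts]
    P_bar = sum(P_i) / N

    # Chance agreement P_e from the marginal category proportions
    p_j = [sum(row[j] for row in counts) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)

# Three subjects rated by three raters into two categories (ASA grades, say):
# perfect within-subject agreement yields kappa = 1.0.
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # → 1.0
```

Because P_e depends on how ratings spread across the available categories, the same raw agreement can yield different kappa values as the number of categories or subjects changes, which is the interpretation caveat raised in the quoted discussion.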
“…The decrease in the kappa value associated with increasing the level of subclassification is consistent with results from other nested classification systems in the orthopaedic literature. 19-21 When these independent assessments were pooled in the final consensus meeting, it was noted that critical pieces of information, such as cultures or laboratory results that would be required for classification under the CDC system, were either not recorded or not routinely collected in standard patient follow-up. As such, the results of the study do not support using the full CDC classification system to differentiate between levels of post-operative infection when chart data are reviewed retrospectively.…”
Section: Discussion (mentioning; confidence: 99%)