2010
DOI: 10.3928/01477447-20100924-14

Evaluation of the Reliability of Classification Systems Used for Distal Radius Fractures

Abstract: The objective of this investigation was to evaluate the reliability of classification systems by determining inter- and intraobserver agreement in displaced distal radius fractures. Radiographs of 32 patients (21 men and 11 women with a mean age of 41.6 years) who presented with a displaced distal radius fracture were classified by 9 orthopedic surgeons (5-25 years' experience) using 5 different classification systems (Fernandez, AO, Frykman, Melone, and Universal) twice with 20-day inter…

Cited by 52 publications (35 citation statements) | References 18 publications
“…1,6 In addition, we suggest that they should be simple, easy to remember and have acceptable interobserver agreement (reliability), intra-observer agreement (reproducibility) and validity. 7 Reliability and reproducibility are demonstrated by the ability of a classification to return the same result for a particular patient's data when shown to multiple observers (reliability) or to the same observer viewing the same patient's data at different time points (reproducibility). Validity is the accuracy with which the classification determines the true fracture type.…”
Section: Defining a Useful Classification System
“…It was shown to have slight to moderate reliability and reproducibility with kappa values of 0.337, 0.4-0.6, respectively, in one study 19 and 0.056, 0.262 in another study. 7 Observers had the greatest challenge in identifying the four fragments on x-ray. 19 …”
Section: Melone (1984)
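The agreement figures quoted above are kappa statistics. As a minimal sketch (not from the cited study, and using made-up ratings), the example below shows how inter-observer kappa (two surgeons classifying the same radiographs) and intra-observer kappa (one surgeon classifying the same radiographs at two time points) would typically be computed with scikit-learn's cohen_kappa_score.

```python
# Illustrative sketch only: hypothetical classification labels assigned
# to the same 10 radiographs. Not data from the cited study.
from sklearn.metrics import cohen_kappa_score

observer_a_round1 = ["A2", "C1", "B3", "A2", "C2", "A3", "B1", "C1", "A2", "B3"]
observer_b_round1 = ["A2", "C2", "B3", "A3", "C2", "A3", "B1", "C1", "A2", "B1"]
observer_a_round2 = ["A2", "C1", "B3", "A2", "C2", "A2", "B1", "C1", "A2", "B3"]

# Inter-observer agreement (reliability): two observers, same session.
inter_kappa = cohen_kappa_score(observer_a_round1, observer_b_round1)

# Intra-observer agreement (reproducibility): same observer, two sessions.
intra_kappa = cohen_kappa_score(observer_a_round1, observer_a_round2)

print(f"inter-observer kappa: {inter_kappa:.3f}")
print(f"intra-observer kappa: {intra_kappa:.3f}")
```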
“…Furthermore, there is poor inter and intra-observer reliability among distal radius classification systems. 11,12 A more relevant clinical outcome is whether the management following reviewing an image via MMS is reliable when compared to viewing the images via a PACS system. To that end, a retrospective study was designed to assess whether MMS was reliable in predicting the recommended management when compared to PACS.…”
Section: Introduction