2008
DOI: 10.1590/s1516-31802008000300008

Are distal radius fracture classifications reproducible? Intra and interobserver agreement

Abstract: CONTEXT AND OBJECTIVE: Various classification systems have been proposed for fractures of the distal radius, but the reliability of these classifications is seldom addressed. For a fracture classification to be useful, it must provide prognostic significance, interobserver reliability and intraobserver reproducibility. The aim here was to evaluate the intraobserver and interobserver agreement of distal radius fracture classifications. DESIGN AND SETTING: This was a validation study on interobserver and intra…

Cited by 64 publications (75 citation statements). References 26 publications.
“…4 A reliable distal radius fracture classification is necessary for the systematic treatment of these fractures and is essential for comparing results across different clinical studies. 12,13,15,19,31,32 In the present study, the intraobserver and interobserver reproducibility of the IDEAL classification was generally higher than that of the established classifications. We believe this classification system is more reliable because its classification features can be assessed clearly.…”
Section: Discussion (contrasting)
confidence: 58%
“…An effective classification system must be valid, reliable and reproducible, but it should also standardize a language for consistent communication, provide guidelines for appropriate treatment, indicate the likelihood of complications and fracture instability and predict a realistic prognosis for each fracture. 15 The system should also provide a mechanism for evaluating and comparing treatment results with results from similar fractures in different centers reported at different times. 16,17 Currently, none of the classification systems available have reproducibility that adequately provides evidence for treatment and prognosis.…”
Section: Introduction (mentioning)
confidence: 99%
“…Although this matter had previously been evaluated in concordance studies involving fracture classifications, it had not been addressed in studies measuring patellar height (28) . The reliability of the IS and CD indices was congruent with what is seen in the literature, which strengthens our results.…”
Section: Discussion (mentioning)
confidence: 99%
“…It has shown fair to moderate reliability (k = 0.226-0.41) but fair to substantial reproducibility (k = 0.49-0.621). 7,20,21 Conclusion Classification systems in their most useful form can be tools for succinct, rapid communication between healthcare professionals in clinical contexts. Further, when they allude to treatment, recovery or prognosis, they become an invaluable tool for a clinician and can often guide or polarise treatment strategy.…”
Section: Universal (1993) (mentioning)
confidence: 99%