2003
DOI: 10.1016/s0883-5403(03)00107-4
A comparison of the reliability and validity of bone stock loss classification systems used for revision hip surgery


Cited by 63 publications (89 citation statements)
References 11 publications
“…Intraobserver κ values for the expert group were poor to moderate (κ = 0.14 and 0.33 for two separate reviewers in one study [4], and average κ = 0.46 [0.27-0.60] for a cohort of three reviewers in another study [1]), whereas the nonexpert group was fair to moderate (κ = 0.31 and 0.41 for two separate reviewers in one study [4], and average κ = 0.37 (0.26-0.50) for a cohort of three reviewers in another study [1]), unexpectedly showing greater variability in the experts' analysis and no substantial improvement over the nonexperts' evaluation. Interobserver reliability was similarly inconsistent; expert κ values were substantially improved compared with nonexperts in one study (expert, κ = 0.56; nonexpert, κ = 0.27) [4], and approximately equivalent in another study (experts, κ = 0.25 and 0.27; nonexperts, κ = 0.18 and 0.31) [1].…”
Section: Reliability and Validity
Citation type: mentioning
Confidence: 93%
“…The kappa (κ) analysis is a representation of the proportion of agreement beyond that expected by chance [10]; a value of zero indicates no agreement better than chance and a value of 1 indicates agreement between two scorers is perfect [10]. Intraobserver reliability for the Paprosky classification has varied with reported κ values ranging from 0.14 to 0.75 [1,4,12,19]. Most of the time the values were between 0.3 and 0.6, indicating poor to good agreement.…”
Section: Reliability and Validity
Citation type: mentioning
Confidence: 99%
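The excerpt above defines the kappa statistic only in words. The short Python sketch below makes the arithmetic concrete: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance from each rater's marginal frequencies. It is illustrative only; the two observers, the ten cases, and the defect-type labels are hypothetical and are not drawn from the cited studies.

```python
from collections import Counter


def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels on the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement from the product of each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)

    # kappa = 0: no better than chance; kappa = 1: perfect agreement.
    return (p_o - p_e) / (1 - p_e)


if __name__ == "__main__":
    # Hypothetical example: two observers grading the same 10 hips
    # with a three-level defect classification ("I", "II", "III").
    obs1 = ["I", "II", "II", "III", "I", "II", "III", "I", "II", "II"]
    obs2 = ["I", "II", "III", "III", "I", "I", "III", "II", "II", "II"]
    print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")  # ~0.54, i.e. moderate
```

In this toy example the raters agree on 7 of 10 cases (p_o = 0.70) but would agree on 3.5 by chance (p_e = 0.35), giving κ ≈ 0.54, which falls in the same moderate range as many of the values reported above.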