2004
DOI: 10.2519/jospt.2004.34.8.430

The Interrater Reliability Among Physical Therapists Newly Trained in a Classification System for Acute Low Back Pain

Abstract: Study Design: A prospective methodological interrater reliability study. Objectives: To calculate the interrater reliability among clinicians newly trained in a classification system for acute low back pain and to determine the level of agreement at key junctures within the classification algorithm. Background: The utility of a classification system for patients with low back pain depends on its reliability and generalizability. To be practical, clinicians must be able to apply the system after a reasonable am…

Cited by 34 publications (2 citation statements)
References 31 publications
“…Kiesel et al [22] achieved a slightly higher kappa value (0.65) with 8 PTs who were familiar with the classification system and 30 patients with LBP. Heiss et al [23], in a study in which 45 acute LBP patients were classified by four PTs who were unfamiliar with the classification system, found a low kappa value (0.15). Finally, Fritz et al [24] investigated the inter-rater reliability of the system in a vignette study with 30 PTs and 123 LBP patients, and reported a kappa value of 0.60.…”
Section: Introduction (mentioning)
Confidence: 99%
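For context on the kappa values compared in the excerpt above, Cohen's kappa corrects observed interrater agreement for the agreement expected by chance. The definition below is the standard formula, not taken from the cited studies:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

where \(p_o\) is the observed proportion of patients assigned to the same classification by both raters and \(p_e\) is the proportion of agreement expected by chance from each rater's marginal category frequencies. By the commonly used Landis and Koch benchmarks, a kappa of 0.15 indicates only slight agreement, while values of 0.60 to 0.65 fall in the moderate to substantial range.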
“…Furthermore, the use of paper cases removes additional confounding factors such as exam performance by the clinician that can influence reliability outcomes and thus allows the examination of the classification algorithm itself. Other studies, in contrast, have actually had raters examine patients repeatedly (Wilson et al, 1999; Heiss et al, 2004; Petersen et al, 2004; Fritz et al, 2006; Trudelle-Jackson et al, 2008; Harris-Hayes and Van Dillen, 2009; Vibe Fersum et al, 2009). The same patients are examined by different clinicians to determine if similar clinical results are obtained and if assignment to a particular classification category by different examiners is the same for each patient.…”
Section: Discussion (mentioning)
Confidence: 99%