1989
DOI: 10.1093/ptj/69.11.970

Kappa Coefficient Calculation Using Multiple Ratings Per Subject: A Special Communication

Abstract: The purpose of this special communication is to describe the application of the Kappa coefficient for the estimation of interobserver agreement. Kappa is a preferred statistic for the estimation of the accuracy of nominal and ordinal data in clinical research by physical therapists. A brief introduction to the properties of the Kappa coefficient is given, and a special case of Kappa for multiple ratings per subject is explained. A FORTRAN program, written specifically for the multiple-ratings situation, is described…
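The abstract refers to a special case of kappa for multiple ratings per subject. As a rough illustration of that situation, and under the assumption of the same fixed number of raters for every subject, a Fleiss-style kappa can be sketched in Python; this is a hypothetical example, not a port of the article's FORTRAN program.

```python
# Hypothetical sketch: kappa for multiple ratings per subject (Fleiss-style),
# assuming every subject receives the same number of ratings m.
# Not a port of the FORTRAN program described in the article.
import numpy as np

def multirater_kappa(counts):
    """counts[i, j] = number of raters assigning subject i to category j.
    Every row must sum to the same number of ratings m (m >= 2)."""
    counts = np.asarray(counts, dtype=float)
    n, _ = counts.shape
    m = counts[0].sum()                                   # ratings per subject
    assert m >= 2 and np.allclose(counts.sum(axis=1), m)

    # Observed agreement: per-subject pairwise agreement, averaged over subjects
    p_i = (np.sum(counts ** 2, axis=1) - m) / (m * (m - 1))
    p_obs = p_i.mean()

    # Chance agreement from the overall category proportions
    p_j = counts.sum(axis=0) / (n * m)
    p_exp = np.sum(p_j ** 2)

    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: 4 subjects, 3 ratings each, 2 categories (e.g., pass/fail)
ratings = [[3, 0],
           [2, 1],
           [0, 3],
           [1, 2]]
print(round(multirater_kappa(ratings), 3))   # 0.333 for this toy data
```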

Citations: cited by 78 publications (36 citation statements)
References: 16 publications
“…This calculation provides a reasonable estimate of the reliability of dichotomous pass/fail data. 1,25 As cited by several clinical research publications, ICC and Kappa values above 0.75 should be considered representative of high levels of reliability, while values between 0.4 and 0.75 are indicative of a fair-to-moderate level of reliability. ICC values below 0.4 should be considered representative of a poor level of reliability.…”
Section: Discussion
confidence: 98%
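As a minimal sketch (an assumed helper, not from the cited publications) of how the cut-offs quoted above might be applied when reporting results:

```python
# Hypothetical helper applying the commonly cited ICC/kappa bands from the
# excerpt above (>0.75 high, 0.40-0.75 fair to moderate, <0.40 poor).
def reliability_label(coefficient: float) -> str:
    """Map an ICC or kappa value to a qualitative reliability band."""
    if coefficient > 0.75:
        return "high"
    if coefficient >= 0.40:
        return "fair to moderate"
    return "poor"

for value in (0.82, 0.55, 0.21):
    print(value, "->", reliability_label(value))
```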
“…Inter-observer agreement and agreement with the reference were estimated using different measures of agreement [27]: simple proportions of agreement (Pa) and proportions of specific agreement, and the quadratically weighted Cohen's kappa coefficient (Kc) (estimated by the intra-class correlation coefficient) [28–33]. The confidence intervals for the proportions of agreement were estimated with the binomial distribution [33].…”
Section: Discussion
confidence: 99%
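For illustration only (the function and data below are invented, not drawn from the cited study), a simple proportion of agreement between two observers with a normal-approximation binomial confidence interval could be computed as follows:

```python
# Hypothetical example: simple proportion of agreement between two observers
# with a Wald (normal-approximation) binomial confidence interval.
import math

def proportion_of_agreement(ratings_a, ratings_b, z=1.96):
    """Return (p_agree, lower, upper) for two equal-length rating sequences."""
    assert len(ratings_a) == len(ratings_b) and len(ratings_a) > 0
    n = len(ratings_a)
    p = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

print(proportion_of_agreement([1, 1, 0, 2, 2, 0], [1, 0, 0, 2, 2, 0]))
```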
“…Almutairi [28] stated that the results of the accuracy analysis can be summarized by an overall accuracy percentage and a kappa statistic. The overall accuracy and the kappa coefficient [29,30] were therefore used in this study. Here, overall accuracy is defined to be the percentage of pixels correctly detected.…”
Section: Accuracy Assessment Methods
confidence: 99%
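A hypothetical sketch of the two summary figures mentioned in this excerpt, computed from a made-up confusion matrix of reference versus detected classes (the pixel counts are invented):

```python
# Hypothetical accuracy assessment: overall accuracy and Cohen's kappa from a
# confusion matrix (rows = reference class, columns = detected class).
import numpy as np

confusion = np.array([[50,  5],
                      [ 8, 37]])        # invented pixel counts

total = confusion.sum()
overall_accuracy = np.trace(confusion) / total          # fraction correctly detected

p_a = overall_accuracy                                   # observed agreement
p_e = np.sum(confusion.sum(axis=0) * confusion.sum(axis=1)) / total ** 2
kappa = (p_a - p_e) / (1 - p_e)

print(f"overall accuracy = {overall_accuracy:.3f}, kappa = {kappa:.3f}")
```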
“…Here, overall accuracy is defined to be the percentage of pixels correctly detected. Let p_a be the proportion of agreement and p_e be the chance agreement; the kappa coefficient is defined as the proportion of agreement among raters after chance agreement has been removed [30], which can be expressed as: …”
Section: Accuracy Assessment Methods
confidence: 99%
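The expression elided at the end of this excerpt is, given the definitions of p_a and p_e just quoted, the standard form of Cohen's kappa; the following is a reconstruction, not a verbatim quote of the citing paper:

```latex
\kappa = \frac{p_a - p_e}{1 - p_e}
```

With this form, kappa equals 1 when observed agreement is perfect and 0 when observed agreement is no better than chance.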