2005
DOI: 10.1191/0962280205sm412oa

Chance-corrected measures of reliability and validity in K × K tables

Abstract: When studying the degree of overall agreement between the nominal responses of two raters, it is customary to use the coefficient kappa. A more detailed analysis requires the evaluation of the degree of agreement category by category, and this is carried out in two different ways: using the value of kappa in the collapsed table for each category or using the agreement index for each category (proportion of agreements observed). Both indices have disadvantages: the former is sensitive to marginal totals; the latter…
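
To make the two per-category analyses concrete, here is a minimal Python sketch, assuming the input is a K × K contingency table of counts. The collapsed-table calculation applies the standard Cohen's kappa formula to each category-versus-rest 2 × 2 table; the agreement index is taken here as the proportion of specific agreement, 2·n_kk / (n_k· + n_·k), which is one common reading of "proportion of agreements observed". Function names and the example counts are illustrative, not from the paper.

```python
import numpy as np

def kappa(table):
    """Cohen's kappa for a square contingency table of counts."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n                          # observed agreement
    pe = (t.sum(axis=0) @ t.sum(axis=1)) / n**2   # chance agreement from marginals
    return (po - pe) / (1 - pe)

def collapse(table, k):
    """Collapse a K x K table to 2 x 2: category k versus all other categories."""
    t = np.asarray(table, dtype=float)
    a = t[k, k]
    b = t[k, :].sum() - a   # rater 1 chose k, rater 2 did not
    c = t[:, k].sum() - a   # rater 2 chose k, rater 1 did not
    d = t.sum() - a - b - c
    return np.array([[a, b], [c, d]])

def per_category(table):
    """For each category: kappa of the collapsed table, and the proportion
    of specific agreement 2*n_kk / (n_k. + n_.k) as the agreement index."""
    t = np.asarray(table, dtype=float)
    return [(kappa(collapse(t, k)),
             2 * t[k, k] / (t[k, :].sum() + t[:, k].sum()))
            for k in range(t.shape[0])]

# Example: a 3 x 3 table of counts from two raters (illustrative numbers).
table = [[20, 3, 1],
         [4, 15, 2],
         [1, 2, 12]]
print(kappa(table))         # overall kappa
print(per_category(table))  # (kappa_k, agreement index) per category
```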

Cited by 27 publications (18 citation statements). References 15 publications.
“…If marginal totals are very small or very unbalanced, the resulting kappa can be paradoxically high or low as compared with the proportion of observed agreement. 30 , 31 Delta is an alternative chance-corrected measure of validity that is not sensitive to marginal totals but will be similar to kappa when marginal totals are not excessively unbalanced. 31 Because of the small numbers in this study, delta values were calculated in addition to kappa values using the program written by Martin and Femia.…”
Section: Methods
confidence: 99%
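
The "paradoxically high or low" behaviour of kappa is easy to reproduce. The sketch below (invented counts, not data from the citing study) contrasts two 2 × 2 tables with identical observed agreement of 0.90: with balanced marginals kappa is high, while with extremely unbalanced marginals it drops below zero. Delta itself is computed by Martín and Femia's own program and is not re-implemented here.

```python
import numpy as np

def kappa(table):
    """Cohen's kappa for a square table of counts."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    po = np.trace(t) / n
    pe = (t.sum(axis=0) @ t.sum(axis=1)) / n**2
    return (po - pe) / (1 - pe)

# Highly unbalanced marginals: 90 of 100 cases fall in a single cell.
unbalanced = [[90, 5],
              [5,  0]]
# Balanced marginals with the same observed agreement (0.90).
balanced = [[45, 5],
            [5, 45]]

for name, t in [("unbalanced", unbalanced), ("balanced", balanced)]:
    po = np.trace(np.asarray(t, float)) / np.asarray(t, float).sum()
    print(f"{name}: observed agreement = {po:.2f}, kappa = {kappa(t):.2f}")
# unbalanced: observed agreement = 0.90, kappa = -0.05
# balanced:   observed agreement = 0.90, kappa =  0.80
```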
“…The degree of concordance between cytology, PCR of HR-HPV, and biopsy results was analyzed using the Kappa index. The results of the test were evaluated using the classification of Landis and Koch in which a value of k<0.20 would be considered poor; 0.21 to 0.40 weak; 0.41 to 0.60 moderate; 0.61 to 0.80 good; and 0.81 to 1.00 very good [21].…”
Section: Methods
confidence: 99%
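
A small helper makes the quoted Landis and Koch bands mechanical. The thresholds and labels follow the classification exactly as quoted above; how values at the band edges (exactly 0.20, 0.40, 0.60, 0.80) are resolved is an assumption.

```python
def strength_of_agreement(kappa):
    """Verbal label for a kappa value, per the bands quoted above
    (placement of exact boundary values is an assumption)."""
    if kappa <= 0.20:
        return "poor"
    if kappa <= 0.40:
        return "weak"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "good"
    return "very good"

print(strength_of_agreement(0.55))  # moderate
print(strength_of_agreement(0.85))  # very good
```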
“…However, there was a higher incidence of M1a disease in patients staged with the blind endoprobe (p = 0.007), arguably reflecting larger tumour burden in patients with stenotic tumours, and consistent with the comparatively more advanced T stages. It is regrettable that previous studies do not always report weighted kappa values, as they have long been proven to accurately predict the strength of agreement between the nominal responses of two raters [29,30]. Moreover, this method has proven to be accurate when applied to medical staging investigations [31,32], especially when compared with histopathology [33].…”
Section: Discussion
confidence: 93%
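
Weighted kappa credits partial agreement between ordered categories such as T stages. The sketch below implements the standard linear- and quadratic-weight formulation; the staging counts are invented for illustration and are not taken from the cited studies.

```python
import numpy as np

def weighted_kappa(table, weights="quadratic"):
    """Weighted kappa for a K x K table of counts over ordered categories."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    K = t.shape[0]
    i, j = np.indices((K, K))
    if weights == "linear":
        w = 1 - np.abs(i - j) / (K - 1)       # linear agreement weights
    else:
        w = 1 - ((i - j) / (K - 1)) ** 2      # quadratic agreement weights
    po = (w * t).sum() / n                    # weighted observed agreement
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / n
    pe = (w * expected).sum() / n             # weighted chance agreement
    return (po - pe) / (1 - pe)

# Illustrative counts: one staging method (rows) vs. histopathology (columns).
staging = [[10, 3, 0],
           [2, 12, 4],
           [0, 3, 11]]
print(round(weighted_kappa(staging, "linear"), 3))
print(round(weighted_kappa(staging, "quadratic"), 3))
```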