1996
DOI: 10.1016/s0022-3956(96)00033-7
Comparing correlated kappas by resampling: Is one level of agreement significantly different from another?

Cited by 85 publications (64 citation statements)
References 41 publications
“…Because the NWD was common to both κ coefficients, comparison of independent κ coefficients may not be appropriate (19). Therefore, a method similar to that suggested for comparing correlated κs using the bootstrap technique was used (20). Bootstrap samples were drawn with replacement from each of the three sets of genes (NWD, Apc, and Muc2).…”
Section: Methods (mentioning; confidence: 99%)
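The procedure quoted above (bootstrap samples drawn with replacement, where a common rating is shared by both kappa coefficients) can be sketched as follows. This is a minimal illustration assuming binary (0/1) ratings; `cohen_kappa` and `bootstrap_kappa_diff` are hypothetical names, not the cited authors' code.

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa for two binary (0/1) rating vectors."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    pe = (np.mean(a) * np.mean(b)               # chance agreement
          + (1 - np.mean(a)) * (1 - np.mean(b)))
    return (po - pe) / (1 - pe)

def bootstrap_kappa_diff(ref, x, y, n_boot=1000, seed=0):
    """Bootstrap the difference kappa(ref, x) - kappa(ref, y).

    `ref` is the rating common to both coefficients, so the two
    kappas are correlated; resampling subjects jointly preserves
    that dependence, as in the quoted method.
    """
    rng = np.random.default_rng(seed)
    ref, x, y = map(np.asarray, (ref, x, y))
    n = len(ref)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample with replacement
        diffs[i] = (cohen_kappa(ref[idx], x[idx])
                    - cohen_kappa(ref[idx], y[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% percentile interval
    return diffs.mean(), (lo, hi)
```

A difference whose bootstrap interval excludes zero would indicate that one level of agreement differs from the other, which is the question posed in the paper's title.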
“…Kappa values were interpreted using definitions outlined by Landis and Koch (1977). To examine whether the level of education influenced the reliability, differences in Fleiss kappa values between Credentialed and Diploma therapists were compared. A bootstrap method with 1000 samples was utilized, and Fleiss kappa coefficients were calculated separately for the Credentialed and Diploma raters for each of these samples (McKenzie et al, 1996). The differences between the Fleiss kappa coefficients were then determined.…”
Section: Discussion (mentioning; confidence: 99%)
“…However, these measurements are apparently not independent, and hence neither the usual g-sample inference tools nor the test procedure proposed in Section 2 can be directly applied. Donner et al (2000) and McKenzie et al (1996) studied the comparison of two dependent (correlated) kappas. However, the categorical scales used in their studies were restricted to binary.…”
Section: Comparison of g Dependent Agreements (mentioning; confidence: 99%)
“…However, these kappas are apparently not independent, and hence no usual k-sample inference tools can be directly applied. McKenzie et al (1996) studied the comparison of two kappa statistics obtained from two dependent 2 × 2 tables. Donner et al (2000) studied the test for the equality of two dependent kappa statistics.…”
Section: Introduction (mentioning; confidence: 99%)