Tips for learners of evidence-based medicine: 3. Measures of observer variability (kappa statistic)
Published: 2004
DOI: 10.1503/cmaj.1031981

Cited by 435 publications (315 citation statements)
References 11 publications
“…The Cohen's κ value was used as a measurement of the level of agreement beyond chance between the diagnoses made by physical examination and angiography (9,10). The details of the qualitative classification of κ values as degree of agreement beyond chance can be appreciated elsewhere (9,10).…”
Section: Statistical Analyses
Mentioning (confidence: 99%)
“…κ values between 0.4 and 0.6 denote a moderate agreement. Finally, values >0.6 denote a substantial agreement beyond chance (9). The sensitivity, specificity, positive predictive value, and negative predictive value of the physical examination in the diagnosis of AVF stenosis were also calculated.…”
Section: Statistical Analyses
Mentioning (confidence: 99%)
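
The diagnostic accuracy measures named in this excerpt follow directly from a 2x2 table of index-test results against the reference standard. A minimal sketch in Python; the function name and counts are illustrative assumptions, not data from the cited study:

    def diagnostic_accuracy(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 table of counts."""
        return {
            "sensitivity": tp / (tp + fn),  # true positives among all with the condition
            "specificity": tn / (tn + fp),  # true negatives among all without the condition
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
        }

    # Hypothetical counts: physical examination vs. angiography for AVF stenosis
    print(diagnostic_accuracy(tp=40, fp=5, fn=10, tn=45))
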
“…The kappa index analysis evaluated the reproducibility by the degree of agreement between the results of the two observations, taking into account the proportion of observed in relation to the expected agreement. The identified kappa index of 0.96 is regarded as almost perfect agreement between both tests (McGinn et al, 2004).…”
Section: Discussion
Mentioning (confidence: 95%)
“…While often used to measure classification performance, P_o does not take into account the classification agreement that might be due to chance [30]. A more conservative classification performance measure is the κ score [31,32], a chance-corrected measure of agreement between the desired and predicted class assignments, κ = (P_o - P_r)/(1 - P_r). If the classification agreement is due strictly to chance, κ = 0.…”
Section: Measuring Performance
Mentioning (confidence: 99%)
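
The chance-corrected agreement defined by this formula can be computed from a square matrix of agreement counts between two classifications. A minimal sketch in Python, assuming a NumPy-style implementation; the function name and the toy table are illustrative, not taken from the cited work:

    import numpy as np

    def cohens_kappa(confusion):
        """Cohen's kappa from a square matrix of agreement counts."""
        confusion = np.asarray(confusion, dtype=float)
        n = confusion.sum()
        p_o = np.trace(confusion) / n                                       # observed agreement P_o
        p_r = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2  # chance agreement P_r
        return (p_o - p_r) / (1.0 - p_r)

    # Toy example: two raters classify 100 cases as positive/negative
    table = [[40, 10],
             [5, 45]]
    print(round(cohens_kappa(table), 2))  # 0.7, "substantial" agreement by the thresholds quoted above
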