2017
DOI: 10.1186/s12874-017-0340-6

Kappa statistic to measure agreement beyond chance in free-response assessments

Abstract: Background: The usual kappa statistic requires that all observations be enumerated. However, in free-response assessments, only positive (or abnormal) findings are notified, but negative (or normal) findings are not. This situation occurs frequently in imaging or other diagnostic studies. We propose here a kappa statistic that is suitable for free-response assessments. Method: We derived the equivalent of Cohen’s kappa statistic for two raters under the assumption that the number of possible findings for any given …

Cited by 21 publications (7 citation statements) · References 18 publications
“…Using two different designs in the reliability study of CSDD, Barca et al (2010b) reported a higher kappa value (0.91) when both raters were present during the assessment and a lower kappa value (0.57) when the raters assessed the patients independently. Landis and Koch characterized kappa values between 0.41 and 0.60 as moderate, which is supported by other authors (Carpentier, Combescure, Merlini, & Perneger, 2017; Landis & Koch, 1977).…”
Section: Validity and Reliability Of The Raid-nsupporting
confidence: 77%
“…All formulas below are as presented. Let m be the number of raters and let k be the number of rating outcomes. Carpentier et al (2017) expressed the free-response kappa in a computed form from the total numbers of discordant (b and c) and concordant positive (d) observations made in all patients, as 2d/(b + c + 2d). In 84 full-body magnetic resonance imaging procedures in children that were evaluated by two independent raters, the free-response kappa statistic was 0.820.…”
Section: Discussionmentioning
confidence: 94%
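The computed form quoted above, kappa_FR = 2d/(b + c + 2d), can be sketched directly. This is a minimal illustration; the function name and the example counts are illustrative assumptions, not values from the cited paper.

```python
# Free-response kappa for two raters (Carpentier et al., 2017):
# kappa_FR = 2d / (b + c + 2d), where b and c are the discordant
# findings (noted by one rater but missed by the other) and d is
# the number of concordant positive findings over all patients.
def free_response_kappa(b: int, c: int, d: int) -> float:
    """Agreement beyond chance when only positive findings are recorded."""
    denominator = b + c + 2 * d
    if denominator == 0:
        raise ValueError("No positive findings recorded by either rater.")
    return 2 * d / denominator

# Illustrative counts: 4 + 5 discordant findings, 41 concordant positives.
print(round(free_response_kappa(4, 5, 41), 3))  # → 0.901
```

Note that the statistic needs no count of negative (normal) findings, which is exactly what makes it usable for free-response assessments.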
“…Discussed the different measures of inter observer agreements and their desirable properties of 'k' at defined time interval 't'. While, appraising the use of '' a long list of literature is available relevant to the observation of paradoxes' for its long ranging interpretation on the basis of a few real practical illustrations and useful recommendations to overcome them appeared elsewhere (Carpentier et al, 2017;Cohen, 1960;1968;Feinstein and Cicchetti, 1990;Field and Welsh, 2007;Fleiss and Cohen, 1973). However, discussion on the estimation of '' and its interpretation through a more generalized approach is still to be attempted.…”
Section: Introductionmentioning
confidence: 99%
“…Repeated assessments on some infants occurred on different days. To calculate a confidence interval (CI) between the 2.5th and 97.5th percentiles, a bootstrap method was used, 34 where a random sample of 88 patients with replacement from the original data set was obtained, and all assessments from randomly selected patients were used. The κ statistic for the random sample was calculated, and the process was repeated 500 times.…”
Section: Discussionmentioning
confidence: 99%
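The patient-level bootstrap described above (resample whole patients with replacement, recompute κ, repeat 500 times, take the 2.5th/97.5th percentiles) can be sketched generically. The `statistic` callable and the flat data layout are illustrative assumptions; the cited study resampled 88 patients and kept all assessments from each selected patient.

```python
# Patient-level bootstrap for a percentile CI: resample patients (not
# individual assessments) with replacement, recompute the statistic on
# each resample, and take the 2.5th and 97.5th percentiles.
import random

def bootstrap_ci(patients, statistic, n_boot=500, seed=0):
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        # Draw len(patients) patients with replacement; each selected
        # patient contributes all of its assessments to the resample.
        sample = [rng.choice(patients) for _ in range(len(patients))]
        estimates.append(statistic(sample))
    estimates.sort()
    lo = estimates[int(0.025 * n_boot)]       # 2.5th percentile
    hi = estimates[int(0.975 * n_boot) - 1]   # 97.5th percentile
    return lo, hi
```

Resampling at the patient level keeps repeated assessments of the same infant together, so the CI respects the within-patient correlation.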