2017
DOI: 10.1002/bimj.201600093
Comparing dependent kappa coefficients obtained on multilevel data

Abstract: Reliability and agreement are two notions of paramount importance in medical and behavioral sciences. They provide information about the quality of the measurements. When the scale is categorical, reliability and agreement can be quantified through different kappa coefficients. The present paper provides two simple alternatives to more advanced modeling techniques, which are not always adequate in case of a very limited number of subjects, when comparing several dependent kappa coefficients obtained on multile…

Cited by 46 publications (41 citation statements)
References 37 publications
“…To account for the correlation of the clustered data (i.e., multiple measurements within each patient), the method proposed by Obuchowski was applied (26). Interrater reliability was calculated with the Fleiss multirater kappa and compared by using the Hotelling T² test (27). Two-sided P values of less than .05 were considered to indicate statistically significant differences.…”
Section: Results (mentioning, confidence: 99%)
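For readers who want to reproduce the basic agreement statistic cited above, the sketch below computes Fleiss' multirater kappa directly from its definition, in R (the software named by the citing studies). The fleiss_kappa helper and the counts matrix are illustrative assumptions, not taken from any cited paper, and the sketch ignores the Obuchowski correction for clustered data.

## Minimal sketch: Fleiss' multirater kappa from an agreement table.
## counts[i, j] = number of raters assigning subject i to category j;
## every row sums to the (constant) number of raters m.
fleiss_kappa <- function(counts) {
  n <- nrow(counts)
  m <- sum(counts[1, ])
  p_j   <- colSums(counts) / (n * m)                 # overall category proportions
  P_i   <- (rowSums(counts^2) - m) / (m * (m - 1))   # per-subject agreement
  P_bar <- mean(P_i)                                 # observed agreement
  P_e   <- sum(p_j^2)                                # chance agreement
  (P_bar - P_e) / (1 - P_e)
}

## Hypothetical data: 5 subjects, 4 raters, 3 categories
counts <- matrix(c(4, 0, 0,
                   2, 2, 0,
                   1, 3, 0,
                   0, 1, 3,
                   0, 0, 4), ncol = 3, byrow = TRUE)
fleiss_kappa(counts)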
“…We calculated the difference between these two coefficients and summarized it in a graph. We also calculated p values to explore statistically significant differences using an adaptation of Hotelling's T² test described by Vanbelle. 14 Since we were testing the same hypothesis many times, we used Holm's correction procedure to adjust the obtained p values for multiple hypothesis testing.…”
Section: Results (mentioning, confidence: 99%)
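Holm's step-down correction mentioned in this excerpt is available in base R through p.adjust. The raw p values below are invented purely for illustration; the actual kappa-difference p values from the adapted Hotelling's T² test are not reproduced here.

## Hypothetical raw p values from several kappa-difference tests
p_raw  <- c(0.004, 0.020, 0.030, 0.250)
p_holm <- p.adjust(p_raw, method = "holm")   # Holm's step-down correction (base R)
data.frame(raw = p_raw, holm = p_holm)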
“…We calculated the probability of agreement and multirater Conger’s kappa using the delta method for the analysis of multilevel data. 20 Conger’s kappa coefficient was chosen over Fleiss’ kappa because the observers classifying the sounds were the same for all sounds. We analysed the intragroup agreement in each of the seven groups of observers when classifying the recordings for the presence of wheezes and crackles disregarding the breathing phase.…”
Section: Methods (mentioning, confidence: 99%)
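Conger's kappa differs from Fleiss' kappa only in how chance agreement is computed: it uses each rater's own marginal distribution, which is why it is preferred when the same observers classify every item. The sketch below gives the point estimate from a ratings matrix; the conger_kappa helper and the data are assumptions for illustration, and the delta-method machinery for multilevel data from the cited paper is not reproduced.

## Minimal sketch: Conger's (exact) multirater kappa from a ratings matrix
## with subjects in rows and the same raters in columns.
conger_kappa <- function(ratings) {
  cats <- sort(unique(as.vector(ratings)))
  m <- ncol(ratings)
  # counts[i, j]: number of raters placing subject i in category j
  counts <- sapply(cats, function(k) rowSums(ratings == k))
  P_o <- mean((rowSums(counts^2) - m) / (m * (m - 1)))   # observed agreement
  # p_gj[g, j]: proportion of subjects that rater g assigns to category j
  p_gj <- sapply(cats, function(k) colMeans(ratings == k))
  # chance agreement averaged over all ordered rater pairs (rater-specific marginals)
  P_e <- sum(colSums(p_gj)^2 - colSums(p_gj^2)) / (m * (m - 1))
  (P_o - P_e) / (1 - P_e)
}

## Hypothetical data: 6 recordings classified by the same 3 observers
ratings <- matrix(c("wheeze",  "wheeze",  "crackle",
                    "wheeze",  "wheeze",  "wheeze",
                    "crackle", "crackle", "crackle",
                    "normal",  "normal",  "crackle",
                    "normal",  "normal",  "normal",
                    "wheeze",  "crackle", "wheeze"),
                  ncol = 3, byrow = TRUE)
conger_kappa(ratings)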
“…We used the statistical software ‘R’ V.3.2.1 together with the package ‘multiagree’ for the statistical analysis of kappa statistics. 21 …”
Section: Methods (mentioning, confidence: 99%)
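The excerpt names the 'multiagree' package but not the specific functions used, so the lines below only install and load the package and list its exports; whether it installs from CRAN is an assumption.

## Assumption: the package is available on CRAN; if not, it may need to be
## obtained from the author's own distribution channel.
install.packages("multiagree")
library(multiagree)
ls("package:multiagree")   # inspect the exported kappa-comparison functions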