2014
DOI: 10.7717/peerj.651

Approaches to describing inter-rater reliability of the overall clinical appearance of febrile infants and toddlers in the emergency department

Abstract: Objectives. To measure inter-rater agreement of overall clinical appearance of febrile children aged less than 24 months and to compare methods for doing so. Study Design and Setting. We performed an observational study of inter-rater reliability of the assessment of febrile children in a county hospital emergency department serving a mixed urban and rural population. Two emergency medicine healthcare providers independently evaluated the overall clinical appearance of children less than 24 months of age who ha…


Cited by 26 publications (26 citation statements)
References 39 publications
“…12 Gwet's AC1 was chosen due to its superior performance compared to Light's kappa in the setting of high inter-rater agreement. 13,14 Excellent agreement was defined as an AC1 > 0.80 and good agreement as an AC1 > 0.60. As the distributions of these scores were not normal, they were compared between the pre- and post-intervention groups using the Wilcoxon rank-sum test.…”
Section: Discussion (mentioning)
confidence: 99%
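The AC1 computation this excerpt relies on is compact enough to sketch. Below is a minimal Python sketch (not the cited study's code) of Gwet's AC1 for two raters on a nominal scale, followed by the kind of Wilcoxon rank-sum comparison the excerpt describes; the pre_scores and post_scores arrays are hypothetical placeholders, not study data.

```python
import numpy as np
from scipy.stats import ranksums

def gwet_ac1(rater1, rater2):
    """Gwet's first-order agreement coefficient (AC1) for two raters.

    rater1, rater2: equal-length sequences of nominal category labels
    assigned to the same subjects. Assumes at least two categories.
    """
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    cats = np.union1d(r1, r2)
    pa = np.mean(r1 == r2)  # observed percent agreement
    # Marginal proportion of each category, averaged over the two raters.
    pi = np.array([(np.mean(r1 == c) + np.mean(r2 == c)) / 2 for c in cats])
    # Gwet's chance-agreement term.
    pe = np.sum(pi * (1 - pi)) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical agreement scores for the two study periods (placeholder
# values only); the rank-sum test avoids the normality assumption the
# excerpt says these distributions violate.
pre_scores = [0.62, 0.71, 0.55, 0.80, 0.66]
post_scores = [0.78, 0.85, 0.74, 0.90, 0.81]
stat, p = ranksums(pre_scores, post_scores)
print(f"rank-sum statistic = {stat:.3f}, p = {p:.3f}")
```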
“…Gwet's AC1 scores were chosen over Cohen's kappa as they are less affected by prevalence and marginal probability. 11,12 An agreement percentage was used for the nominal variables. The null hypothesis of each P-value test is that Gwet's AC1 score is equal to zero.…”
Section: Discussion (mentioning)
confidence: 99%
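The prevalence sensitivity this excerpt refers to is the well-known "kappa paradox": when one category dominates, Cohen's chance-agreement term inflates and kappa drops even at high raw agreement, while Gwet's chance term shrinks. The sketch below demonstrates this on an assumed, illustrative 2x2 table, not the cited study's data.

```python
import numpy as np

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: chance agreement from the product of the marginals."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    cats = np.union1d(r1, r2)
    pa = np.mean(r1 == r2)
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (pa - pe) / (1 - pe)

def gwet_ac1(rater1, rater2):
    """Gwet's AC1, as in the sketch above."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    cats = np.union1d(r1, r2)
    pa = np.mean(r1 == r2)
    pi = np.array([(np.mean(r1 == c) + np.mean(r2 == c)) / 2 for c in cats])
    pe = np.sum(pi * (1 - pi)) / (len(cats) - 1)
    return (pa - pe) / (1 - pe)

# Hypothetical skewed table (counts are illustrative, not study data):
#                 rater 2: well   rater 2: ill
# rater 1: well        80              10
# rater 1: ill          5               5
r1 = np.array(["well"] * 90 + ["ill"] * 10)
r2 = np.array(["well"] * 80 + ["ill"] * 10 + ["well"] * 5 + ["ill"] * 5)

print(f"raw agreement = {np.mean(r1 == r2):.2f}")    # 0.85
print(f"Cohen's kappa = {cohen_kappa(r1, r2):.2f}")  # ~0.32, depressed by prevalence
print(f"Gwet's AC1    = {gwet_ac1(r1, r2):.2f}")     # ~0.81, tracks raw agreement
```

The gap arises because Cohen's kappa estimates chance agreement from the product of the raters' marginals, which grows as one category dominates, whereas Gwet's term pi * (1 - pi) peaks at a 50/50 split and shrinks with imbalance.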
“…We also assessed the different classification systems for interrater agreement. In this study, we chose Gwet's Agreement Coefficient 2 (AC2) to assess interrater agreement, which offers a statistic that: (1) can be used with more than 2 raters; (2) can be used for ordinal variables; (3) is able to handle missing data; and (4) has been used and recommended in previous studies. The interrater agreement coefficient of classification system A (0.52) falls between that of B (0.42) and C (0.79).…”
Section: Discussion (mentioning)
confidence: 99%
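The full AC2 estimator the excerpt describes handles multiple raters and missing data; implementations exist in, for example, Gwet's irrCAC package for R. The Python sketch below is a minimal version covering only the simplest case, assuming two raters, complete data, and linear ordinal weights, to show where the ordinal weighting enters; the scale labels and ratings are hypothetical.

```python
import numpy as np

def gwet_ac2(rater1, rater2, categories, weights=None):
    """Gwet's AC2 for two raters on an ordered scale.

    categories: the scale levels in order (e.g. well < ill-appearing < toxic).
    weights: K x K agreement-weight matrix; defaults to linear weights,
    which give partial credit that shrinks with ordinal distance.
    With an identity weight matrix this reduces to AC1.
    """
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    K = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    if weights is None:
        i, j = np.indices((K, K))
        weights = 1 - np.abs(i - j) / (K - 1)
    # Observed weighted agreement.
    pa = np.mean([weights[idx[a], idx[b]] for a, b in zip(r1, r2)])
    # Marginal proportion per category, averaged over the two raters.
    pi = np.array([(np.mean(r1 == c) + np.mean(r2 == c)) / 2
                   for c in categories])
    # Weighted chance agreement, Gwet's generalization of the AC1 term.
    pe = weights.sum() / (K * (K - 1)) * np.sum(pi * (1 - pi))
    return (pa - pe) / (1 - pe)

# Hypothetical 3-level ordinal ratings (illustrative only).
scale = ["well", "ill-appearing", "toxic"]
r1 = ["well", "well", "ill-appearing", "toxic", "well", "ill-appearing"]
r2 = ["well", "ill-appearing", "ill-appearing", "toxic", "well", "well"]
print(f"AC2 (linear weights) = {gwet_ac2(r1, r2, scale):.2f}")
```

The linear weights are what make the statistic suitable for ordinal variables, point (2) in the excerpt: a one-step disagreement on the scale still earns partial credit, while opposite-end ratings earn none.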