A Study of Coder Variability (1979)
DOI: 10.2307/2347199

Abstract: Survey questionnaires often contain open‐ended questions, for which interviewers are required to record respondents' replies verbatim. Errors can arise in coding these replies in preparation for statistical analysis. The paper reports the results of an experiment examining the levels of reliability attained by six professional coders in making judgemental codings of a sample of responses to six survey questions. A sizeable degree of unreliability was found, especially with the use of general and “catch…

Cited by 11 publications (4 citation statements). References 17 publications.
“…There are many measures for examining agreement across raters, differing in whether the measure accounts for agreements occurring by chance, whether the ratings are assumed to be nominal or ordinal, and whether items and raters are assumed to be fixed or randomly selected (Shrout and Fleiss 1979; Banerjee et al. 1999). As such, three measures were used to indicate the degree of agreement across the reviewers in their ratings: the proportion of all two-way combinations of reviewers who provide exactly the same rating across all of the questions (the match rate), a multiple coder kappa (Kalton and Stowell 1979), and an intraclass correlation coefficient (ICC). The match rate is intuitive but does not account for chance agreement.…”
Section: Discussion
Confidence: 99%
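
To illustrate the measures named in the statement above, here is a minimal Python sketch, not taken from the cited study: it computes the pairwise match rate and Fleiss' kappa, with Fleiss' kappa standing in for a generic multiple-coder kappa (the Kalton and Stowell 1979 coefficient is defined differently in detail). The data layout and example values are hypothetical.

```python
# Illustrative sketch: pairwise match rate and Fleiss' kappa for R coders
# each assigning one nominal code per item. Not the cited study's code.
from itertools import combinations
from collections import Counter

def match_rate(ratings):
    """Proportion of coder pairs, over all items, that assign identical codes.

    ratings: list of items, each a list of the R coders' codes for that item.
    """
    agree = total = 0
    for item in ratings:
        for a, b in combinations(item, 2):
            agree += (a == b)
            total += 1
    return agree / total

def fleiss_kappa(ratings):
    """Fleiss' kappa, used here as a stand-in multiple-coder kappa."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    category_counts = Counter()   # pooled counts per category, for chance agreement
    p_bar = 0.0                   # mean observed per-item agreement
    for item in ratings:
        counts = Counter(item)
        category_counts.update(counts)
        p_bar += sum(n * (n - 1) for n in counts.values()) / (n_raters * (n_raters - 1))
    p_bar /= n_items
    p_e = sum((c / (n_items * n_raters)) ** 2 for c in category_counts.values())
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: four responses, each coded by three coders
data = [["A", "A", "B"], ["B", "B", "B"], ["A", "C", "A"], ["C", "C", "C"]]
print(match_rate(data))    # 0.667
print(fleiss_kappa(data))  # 0.5
```

The match rate rises mechanically with fewer categories, which is why the kappa and ICC are reported alongside it: both discount the agreement expected by chance.
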
“…We each coded the responses independently and then reconciled all differences. Computing inter-rater reliability with multiple coders and non-mutually exclusive codes was made possible by amending the formula provided by Kalton and Stowell (1979). For objection, we had 73 percent agreement and a kappa score of .65.…”
Section: Note: All Examples In This Table Are Drawn From Responses To Vignettes In Which the Dependent Variable Was Problematic
Confidence: 99%
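
One common way to handle non-mutually exclusive codes is to treat each code as a separate present/absent judgement per response. The sketch below follows that approach with average pairwise percent agreement and average pairwise Cohen's kappa for a single code; it is only a stand-in for, not a reproduction of, the amended Kalton-Stowell formula the citing study describes, and the coder data are hypothetical.

```python
# Illustrative sketch: agreement on one binary (present/absent) code
# across several coders, via averaged pairwise statistics.
from itertools import combinations

def cohen_kappa(x, y):
    """Cohen's kappa for two coders' 0/1 judgements on the same responses."""
    n = len(x)
    p_o = sum(a == b for a, b in zip(x, y)) / n          # observed agreement
    p1x, p1y = sum(x) / n, sum(y) / n                    # marginal "present" rates
    p_e = p1x * p1y + (1 - p1x) * (1 - p1y)              # chance agreement
    return (p_o - p_e) / (1 - p_e)

def multi_coder_binary_agreement(judgements):
    """judgements: one 0/1 vector per coder, all over the same responses.
    Returns (mean pairwise percent agreement, mean pairwise kappa)."""
    agreements, kappas = [], []
    for x, y in combinations(judgements, 2):
        agreements.append(sum(a == b for a, b in zip(x, y)) / len(x))
        kappas.append(cohen_kappa(x, y))
    return sum(agreements) / len(agreements), sum(kappas) / len(kappas)

# Hypothetical data: three coders flagging "objection" in six responses
coders = [
    [1, 0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
]
print(multi_coder_binary_agreement(coders))
```

Repeating this per code gives a per-code agreement and kappa, which matches the way the citing study reports a separate figure for the "objection" code.
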
“…All codes and responses were checked for accuracy by members of the research team and subsequently given a numeric value to aid quantitative analysis. Intra-coder reliability was employed in this study to enhance the coding process of qualitative data (31). A subset of the data was coded at different time points, allowing for the assessment of agreement between the coder’s coding decisions.…”
Section: Methods
Confidence: 99%