1974
DOI: 10.1177/004912417400300204
Sources of Error in the Coding of Questionnaire Data

Abstract: Errors made in the coding of a set of 1,209 relatively structured questionnaires were examined in an effort to discover characteristics of the data set and of the coding instructions which contributed to the occurrence of error. Items included in the questionnaire were ordered with respect to the complexity of the coding task involved. The number of errors per decision was found to vary directly with the level of complexity of the task. Closer examination of the types of decisions involved, including the compu…


Cited by 32 publications (13 citation statements)
References 9 publications
“…There is considerable debate about whether clearly demarcated parts of the text, such as a sentence or paragraph, rather than 'units of meaning' as defined by the coder are the appropriate units of analysis (Garrison et al. 2006; Morrissey 1974). On one hand, the concern with using predefined blocks of text is that it may not accurately reflect the meaning as intended by the respondent.…”
Section: Solving the Unitization Problem with Units of Meaning
Citation type: mentioning; confidence: 99%
“…It is not ordinarily recommended that intercoder reliability be calculated simply as the percentage of agreement among coders, the so-called proportion agreement method (Morrissey 1974), because this does not take into consideration the possibility that coders might agree occasionally by chance (Bernard 2000:459-61). Chance may inflate agreement percentages, especially with only two coders and when they have only a few codes (Grayson and Rust 2001).…”
Section: Calculating Reliability and Agreement
Citation type: mentioning; confidence: 99%
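
The chance-inflation point in the statement above can be made concrete with a short sketch. The following Python snippet (hypothetical codings, not data from any of the cited papers) computes raw percent agreement alongside Cohen's kappa, a chance-corrected coefficient: with two coders and only a few codes, raw agreement can look high while kappa shows that much of it would be expected by chance.

```python
# Minimal sketch (hypothetical codings, not data from the cited papers):
# raw percent agreement vs. Cohen's kappa for two coders.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Proportion of items to which both coders assigned the same code.
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    # Chance-corrected agreement: (p_o - p_e) / (1 - p_e), where p_e is the
    # agreement expected by chance from each coder's marginal code frequencies.
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Two coders, only two codes, and one code dominating: raw agreement looks
# high, but most of it is expected by chance, so kappa falls near (here below) zero.
a = ["yes", "yes", "yes", "yes", "yes", "yes", "yes", "no", "yes", "yes"]
b = ["yes", "yes", "yes", "yes", "yes", "no", "yes", "yes", "yes", "yes"]
print(percent_agreement(a, b))  # 0.8
print(cohens_kappa(a, b))       # about -0.11
```
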
“…inter-coder reliability was obtained, no modifications were needed for consistency (Miles & Huberman, 1994) since, as Krippendorff (2004) and Morrissey (1974) stated, two or more separate coders are needed for more than 90% of inter-coder agreement.…”
Section: Data Analysis Procedures
Citation type: mentioning; confidence: 99%
“…Although such errors may be studied fairly easily, they have received little attention in the literature. Yet the few studies that have been reported have shown that when complex or judgemental codings are involved coding errors can seriously impair the quality of the resulting data (Woodward and Franzen, 1948; Durbin and Stuart, 1954; Sussman and Haug, 1967; Crittenden and Hill, 1971; Kammeyer and Roth, 1971; U.S. Bureau of the Census, 1972, 1974; Duncan and Evers, 1975). In view of this situation it was decided to carry out a small-scale experiment, similar in design to that of Durbin and Stuart (1954), to examine the coding reliability achieved by professional survey organization coders in making judgemental codings of answers to open questions asked on a social survey questionnaire.…”
Citation type: mentioning; confidence: 99%