2002
DOI: 10.1111/j.1468-2958.2002.tb00826.x
Content Analysis in Mass Communication: Assessment and Reporting of Intercoder Reliability

Abstract: As a method specifically intended for the study of messages, content analysis is fundamental to mass communication research. Intercoder reliability, more specifically termed intercoder agreement, is a measure of the extent to which independent judges make the same coding decisions in evaluating the characteristics of messages, and is at the heart of this method. Yet there are few standard and accessible guidelines available regarding the appropriate procedures to use to assess and report intercoder reliability…

Cited by 2,238 publications (1,461 citation statements)
References 29 publications
“…Manual coding content analysis was chosen to observe the change of published information about water management issues over time, as we believe that important elements of arguments are often implied rather than explicit in statements, and human coders are more alert to the latent content beneath the manifest content (Lombard et al 2002; Howland et al 2006). This approach enables data patterns to be determined from unstructured information, and it provides a powerful tool for tracking changes in media coverage (Higuchi 2004; Kirilenko et al 2012).…”
Section: Methods
Mentioning confidence: 99%
“…Calculations for all ICCs followed Shrout and Fleiss (13) Model 2 and used two-way random-effects ANOVA with absolute agreement. ICCs above .90 are very good, above .80 are acceptable, and above .70 are adequate for exploratory research (14).…”
Section: Methods
Mentioning confidence: 99%
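The ICC(2,1) referenced in the statement above falls out of a two-way random-effects ANOVA decomposition. A minimal sketch in Python, assuming a complete targets-by-raters rating matrix with no missing cells; the function name and the example ratings are hypothetical illustrations, not data from the citing paper:

```python
# Sketch of ICC(2,1) -- Shrout & Fleiss Model 2, single rater, absolute
# agreement. Hypothetical example, not code from the cited study.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1) for an (n targets) x (k raters) matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-target means
    col_means = x.mean(axis=0)   # per-rater means

    # Two-way random-effects ANOVA mean squares.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # targets (rows)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters (columns)
    sse = (np.sum((x - grand) ** 2)
           - k * np.sum((row_means - grand) ** 2)
           - n * np.sum((col_means - grand) ** 2))
    mse = sse / ((n - 1) * (k - 1))                        # residual

    # Absolute-agreement form: rater variance stays in the denominator.
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 5 message units scored by 3 coders.
ratings = np.array([[9, 8, 9],
                    [7, 6, 7],
                    [5, 5, 4],
                    [8, 8, 9],
                    [3, 2, 3]])
print(round(icc_2_1(ratings), 3))
```

Keeping the rater (column) variance in the denominator is what makes this the absolute-agreement form of Model 2, as opposed to a consistency-only ICC.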
“…As already mentioned, Lombard et al (2002) applied the following criterion for accepting content analysis findings as sufficiently reliable: α ≥ .70, otherwise %-agreement ≥ .90 (p. 596). They take α as a conservative index and %-agreement as a liberal one, presumably convinced that the truth lies somewhere between these two.…”
Section: Consequences of Lombard et al.'s Reliability Standards
Mentioning confidence: 99%
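The decision rule quoted above is easy to operationalize. A minimal sketch, assuming two coders, nominal categories, and no missing data; the function names and sample codes are hypothetical, and the alpha computation follows Krippendorff's coincidence-matrix definition:

```python
# Sketch of the Lombard et al. (2002, p. 596) rule quoted above:
# accept if alpha >= .70, otherwise fall back to %-agreement >= .90.
# Hypothetical helpers and data, not code from either paper.
from collections import Counter

def percent_agreement(a, b):
    """Proportion of units on which two coders assigned the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha for two coders, nominal data, no missing values."""
    # Coincidence matrix: each unit contributes ordered pairs (a,b) and (b,a).
    pairs = Counter()
    for x, y in zip(a, b):
        pairs[(x, y)] += 1
        pairs[(y, x)] += 1
    n_c = Counter()                       # marginal value frequencies
    for (x, _), cnt in pairs.items():
        n_c[x] += cnt
    n = sum(n_c.values())                 # = 2 * number of units
    d_o = sum(cnt for (x, y), cnt in pairs.items() if x != y) / n
    d_e = sum(n_c[x] * n_c[y]
              for x in n_c for y in n_c if x != y) / (n * (n - 1))
    return 1.0 - d_o / d_e                # assumes >= 2 categories occur

def acceptably_reliable(a, b):
    """Lombard et al.'s rule: alpha >= .70, otherwise %-agreement >= .90."""
    return (krippendorff_alpha_nominal(a, b) >= .70
            or percent_agreement(a, b) >= .90)

# Hypothetical codes from two coders on ten message units.
coder1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos", "neg", "pos"]
coder2 = ["pos", "neg", "pos", "pos", "pos", "neg", "neu", "pos", "neg", "neg"]
print(percent_agreement(coder1, coder2))                  # 0.8
print(round(krippendorff_alpha_nominal(coder1, coder2), 3))
print(acceptably_reliable(coder1, coder2))
```

Note that the rule is disjunctive: data with a chance-corrected index below the threshold can still be accepted on raw %-agreement, which is the liberal fallback the quoted passage describes.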
“…Amplifying Neuendorf (2002, p. 163), who merely quotes a concern expressed elsewhere about the appropriateness of using different coders for coding different but overlapping sets of units, Lombard et al (2002) make it a point of recommending against this attractive possibility (p. 602), without justification, however. I can imagine three: (a) Potter and Levine-Donnerstein (1999) argue that the overlap needs to be large enough, which is correct.…”
Section: Multiple Coders, Multiple Coding Sets, Multiple Metrics
Mentioning confidence: 99%