2015
DOI: 10.1080/21670811.2015.1096598
Taking Stock of the Toolkit

Abstract: When analyzing digital journalism content, journalism scholars are confronted with a number of substantial differences compared to traditional journalistic content. The sheer amount of data and the unique features of digital content call for the application of valuable new techniques. Various other scholarly fields are already applying computational methods to study digital journalism data. Often, their research interests are closely related to those of journalism scholars. Despite the advantages that computat…

Cited by 160 publications (60 citation statements)
References 58 publications
“…Methodological innovation in content analysis procedures rather focuses on automated, computer-assisted techniques (e.g., Boumans & Trilling, 2016; Grimmer & Stewart, 2013; Jacobi, van Atteveldt, & Welbers, 2016). While traditional, manual content analysis is described as a costly and labor-intensive methodology (Krippendorff, 2013), the precision and validity with which computers implement content analysis in its various forms is still in development and often depending on the input of human coders in the form of training data for dictionary or algorithm development (e.g.…”
mentioning
confidence: 99%
“…Haselmayer & Jenny, 2014) or interpretative coding, that we take into account here. Due to the digital availability of various sorts of texts and increasing computer power to analyze large volumes of texts, great advances have been made in terms of automated, computer-assisted content analysis (e.g., Boumans & Trilling, 2016; Krippendorff, 2013). A common adoption relates to content that clearly manifests itself in the use of pre-defined words and phrases (i.e., dictionaries) (e.g., Pang & Lee, 2008; Riloff & Wiebe, 2003).…”
mentioning
confidence: 99%
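The dictionary approach mentioned in this excerpt can be illustrated with a minimal sketch. The category names, word lists, and example sentence below are hypothetical placeholders, not material from Boumans and Trilling (2016) or from any citing study; the sketch only shows how counts of pre-defined words and phrases are turned into a coding decision.

import re
from collections import Counter

# Hypothetical category dictionaries; real studies would use validated word lists.
CONFLICT_TERMS = {"attack", "clash", "dispute", "protest"}
ECONOMY_TERMS = {"budget", "inflation", "market", "unemployment"}

def tokenize(text):
    """Lowercase a document and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def dictionary_counts(text):
    """Count how many tokens fall into each pre-defined word list."""
    tokens = tokenize(text)
    return Counter({
        "conflict": sum(token in CONFLICT_TERMS for token in tokens),
        "economy": sum(token in ECONOMY_TERMS for token in tokens),
    })

doc = "Protesters clash with police as the budget dispute escalates."
print(dictionary_counts(doc))  # Counter({'conflict': 2, 'economy': 1})

Such word-count measures scale cheaply, but, as the excerpt notes, their validity still hinges on how well the human-curated word lists capture the construct of interest.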
“…It should thus be feasible to measure the content of television programming on a much larger scale. Automated content analysis of textual data is becoming increasingly common in communication science (e.g., Boumans and Trilling, 2016), but the analysis of television content remains difficult and labor intensive. Applying automated content analysis to "community annotation" data from second-screen users seems to be a promising approach to close this gap.…”
Section: Discussion
mentioning
confidence: 99%
“…Although the term "automated content analysis" in general encompasses a wide variety of forms (e.g., Grimmer & Stewart, 2013; Hopkins & King, 2010; Krippendorff, 2013), our definition inevitably excludes automatic approaches of merely acquiring data, data entry, or data management other than the actual coding or classification process (e.g., Lewis, Zamith, & Hermida, 2013). Instead, we concentrate on two broad and rather common forms: a dictionary (lexicon-based) approach and a supervised machine learning (SML) approach (see Boumans & Trilling, 2016; Grimmer & Stewart, 2013). As we shall elaborate below, they are rather sensitive to the issue of imperfect gold standards, although the two approaches may nontrivially differ in terms of the degree of their potential sensitivity to this issue.…”
Section: The Use of Human Annotation in Automated Content Analysis
mentioning
confidence: 99%
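The supervised machine learning (SML) route contrasted with the dictionary approach in this excerpt can likewise be sketched. The snippet below is a generic illustration using scikit-learn; the example texts, the two topic labels standing in for human-coded gold-standard annotations, and the choice of a TF-IDF plus logistic regression pipeline are assumptions for demonstration, not the setup of any cited study.

# Sketch of a supervised classifier trained on human-coded labels that
# serve as the gold standard (illustrative data only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = [
    "Parliament debates the new budget proposal",
    "Star striker scores twice in the cup final",
    "Central bank raises interest rates again",
    "Local team celebrates its championship victory",
]
labels = ["politics", "sports", "politics", "sports"]  # human-coded gold standard

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42, stratify=labels
)

# Bag-of-words features feeding a linear classifier: a common SML baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluation against held-out human codings; errors in the gold standard
# propagate directly into these scores.
print(classification_report(y_test, model.predict(X_test), zero_division=0))

Whatever precision and recall such a pipeline reports is only as meaningful as the human annotations it was trained and evaluated on, which is precisely the sensitivity to imperfect gold standards that the quoted passage raises.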