Rethinking Research Methods in an Age of Digital Journalism (2018)
DOI: 10.4324/9781315115047-7

Quantitative analysis of large amounts of journalistic texts using topic modelling

Cited by 50 publications (58 citation statements) | References 0 publications
“…9 The lower the perplexity, the better the prediction. However, the best fit does not have to accord with the optimal interpretability of the topics (Chang et al., 2009; Jacobi et al., 2016). 10 We also equate plural and singular nouns.…”
Section: The Leiden Manifesto (mentioning)
confidence: 99%
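The trade-off the citing authors describe, between predictive fit measured by perplexity and the interpretability of the resulting topics, can be made concrete with a small sketch. The snippet below uses scikit-learn's LatentDirichletAllocation and a tiny toy corpus; both the toolkit and the data are assumptions for illustration, not details taken from the citing paper.

```python
# A minimal sketch (assumed tooling: scikit-learn) of comparing held-out
# perplexity across candidate topic counts. The toy corpus is a placeholder
# for a real collection of journalistic texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split

documents = [
    "the election campaign dominated the news cycle",
    "voters followed the election debate on television",
    "the football club won the championship match",
    "fans celebrated the championship in the city centre",
    "the central bank raised interest rates again",
    "markets reacted to the interest rate decision",
]

X = CountVectorizer(stop_words="english").fit_transform(documents)
X_train, X_test = train_test_split(X, test_size=0.33, random_state=0)

for k in (2, 3, 5):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X_train)
    # Lower held-out perplexity means better predictive fit, but the
    # best-fitting k is not necessarily the most interpretable one
    # (Chang et al., 2009; Jacobi et al., 2016).
    print(f"k={k}: perplexity={lda.perplexity(X_test):.1f}")
```

In practice the perplexity curve is usually inspected alongside the top words of each topic, so that the final number of topics balances statistical fit against human readability.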
“…It is an alternative to dictionary-based analysis, which is the most popular automated analysis approach [19], and it allows one to work with a corpus without prior knowledge, letting the topics emerge from the data. In the same spirit as our work, many authors, for instance [20,21,22], have emphasized the advantage of using automated text classification in social science research. For example, topic modeling has been used in discourse analysis [23], the analysis of social media discussions [24,25,26], and the recognition of entities in news articles [27].…”
Section: Introduction (mentioning)
confidence: 82%
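The point about topics "emerging from the data" without a predefined dictionary can be illustrated with a short sketch. The code below fits an LDA model with gensim on a toy tokenised corpus; the library choice, the corpus, and the parameter values are assumptions made for the example rather than details from the citing paper.

```python
# Sketch (assumed library: gensim) of letting topics emerge from a corpus
# without a predefined dictionary of categories.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenised_docs = [
    ["election", "campaign", "votes", "candidate"],
    ["candidate", "debate", "election", "poll"],
    ["match", "goal", "championship", "team"],
    ["team", "fans", "championship", "stadium"],
]

dictionary = Dictionary(tokenised_docs)                    # vocabulary built from the data itself
corpus = [dictionary.doc2bow(doc) for doc in tokenised_docs]

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, passes=10, random_state=0)

for topic_id, words in lda.show_topics(num_topics=2, num_words=4, formatted=False):
    print(topic_id, [word for word, _ in words])           # emergent topic word lists
```

Unlike a dictionary-based approach, nothing here specifies in advance which words belong to which category; the word lists printed at the end are induced from co-occurrence patterns in the corpus.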
“…This factor-analytic approach is comparable to topic modeling, which uses word distributions and word co-occurrences to detect topics and to assign words to them, especially approaches based on Latent Dirichlet Allocation (LDA), which assigns words to clusters using probability distributions (Blei, Ng, & Jordan, 2003). This method has been applied to the analysis of large sets of documents (for example, Jacobi, van Atteveldt, & Welbers, 2016).…”
Section: Theoretical Framework: Network Approach (mentioning)
confidence: 99%
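The probabilistic assignment described by Blei, Ng, and Jordan (2003) means that, after fitting, every topic is a probability distribution over words and every document a distribution over topics. The gensim-based sketch below shows how those two distributions can be inspected; again, the library and the toy data are illustrative assumptions, not the setup used in the cited studies.

```python
# Sketch (assumed library: gensim): LDA represents each topic as a probability
# distribution over words and each document as a mixture of topics.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenised_docs = [
    ["budget", "tax", "parliament", "vote"],
    ["parliament", "vote", "minister", "budget"],
    ["storm", "flood", "rainfall", "damage"],
    ["flood", "damage", "rescue", "storm"],
]
dictionary = Dictionary(tokenised_docs)
corpus = [dictionary.doc2bow(doc) for doc in tokenised_docs]
lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, passes=10, random_state=0)

# Topic-word distribution: probability of each word given topic 0.
for word_id, prob in lda.get_topic_terms(0, topn=4):
    print("topic 0:", dictionary[word_id], round(prob, 3))

# Document-topic distribution: probabilistic topic mixture for the first document.
print(lda.get_document_topics(corpus[0]))
```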