Political communication has become one of the central arenas of innovation in the application of automated analysis approaches to ever-growing quantities of digitized texts. However, although researchers routinely rely on some form of human coding to validate the results of automated procedures, the quality of this "gold standard" itself often goes unchecked in practice. Contemporary practices of validation via manual annotations are far from being acknowledged as best practices in the literature, and the reporting and interpretation of validation procedures vary greatly. Relying on large-scale Monte Carlo simulations, we systematically assess how the quality of human judgment in manual annotations affects the relative performance evaluations of automated procedures against true standards. The simulations confirm that a researcher runs a substantially greater risk of reaching an incorrect conclusion about the performance of automated procedures when the quality of the manual annotations used for validation is not properly ensured. Our contribution should therefore be regarded as a call for the systematic application of high-quality manual validation materials in any political communication study drawing on automated text analysis procedures.
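The abstract does not report the simulation design in detail, but its core logic can be sketched in a few lines. The sketch below is a minimal illustration with assumed parameter values (the validation-set size, the two classifiers' true accuracies, and the annotator error rate are all hypothetical, not taken from the study); it shows how a noisy manual "gold standard" can flip the apparent ranking of two automated procedures:

```python
import numpy as np

rng = np.random.default_rng(42)

def flip(labels: np.ndarray, error_rate: float) -> np.ndarray:
    """Flip binary labels with probability `error_rate`,
    simulating imperfect (automated or human) coding."""
    return np.where(rng.random(labels.shape) < error_rate, 1 - labels, labels)

N_DOCS, N_RUNS = 200, 10_000        # validation-set size, simulation runs
ACC_A, ACC_B = 0.80, 0.75           # classifier A is truly better than B
ANNOTATOR_ERROR = 0.20              # share of miscoded "gold standard" labels

wrong = {"perfect gold": 0, "noisy gold": 0}
for _ in range(N_RUNS):
    truth = rng.integers(0, 2, N_DOCS)          # true document labels
    pred_a = flip(truth, 1 - ACC_A)             # classifier outputs with
    pred_b = flip(truth, 1 - ACC_B)             # independent error noise
    for name, gold in (("perfect gold", truth),
                       ("noisy gold", flip(truth, ANNOTATOR_ERROR))):
        # The researcher ranks the classifiers by measured accuracy
        # against the available gold standard
        if (pred_b == gold).mean() >= (pred_a == gold).mean():
            wrong[name] += 1                    # worse classifier wins

for name, count in wrong.items():
    print(f"{name}: wrong ranking in {count / N_RUNS:.1%} of runs")
```

Because annotation noise shrinks the observed accuracy gap between the classifiers while sampling variation stays the same, the noisy-gold condition yields the wrong ranking noticeably more often than validation against error-free labels, which is the mechanism behind the abstract's central claim.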
During recent years, worries about fake news have been a salient aspect of mediated debates. However, the pervasive and fuzzy usage of the term in news reporting has led a growing number of scholars and other public actors to call for its abandonment in public discourse altogether. Given its status as a controversial but arguably effective buzzword in news coverage, we know surprisingly little about exactly how journalists use the term in their reporting. This study offers empirical evidence on this question by means of a quantitative content analysis. Using the case of Austria, where discussions around fake news have been ubiquitous in recent years, we analyzed all news articles mentioning the term "fake news" in major daily newspapers between 2015 and 2018 (N = 2,967). We find that journalistic reporting on fake news shifts over time from mainly describing the threat of online disinformation to a more normalized and broader usage of the term in relation to attacks on legacy news media. Furthermore, news reports increasingly use the term in contexts entirely unrelated to disinformation or media attacks. In using the term this way, journalists arguably contribute not only to its salience but also to a questionable normalization process.
In communication research, topic modeling is primarily used to discover systematic patterns in monolingual text corpora. To advance its use beyond single languages, we provide an overview of recently proposed strategies for extracting topics from multilingual text collections for the purpose of comparative research. Moreover, we discuss, demonstrate, and facilitate the use of the "Polylingual Topic Model" (PLTM) for such analyses. The appeal of this model is that it derives lists of related clustered words in different languages with little reliance on translation or multilingual dictionaries and without the need for manual post-hoc matching of topics. PLTM bridges the gap between languages by exploiting connections between documents in the training data. As these training documents are the crucial resource for the model, we compare model evaluation metrics for different strategies of building training documents. By discussing the advantages and limitations of the different strategies with respect to different scenarios, our study contributes to the methodological discussion on automated content analysis of multilingual text corpora.
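The abstract does not specify how these document connections are established, but a common strategy is to group comparable documents across languages into training tuples via shared metadata, for instance the ID of the press release or event an article covers. The following minimal sketch illustrates only this tuple-building step; the corpus records and field names are hypothetical, and the resulting tuples would then be handed to a PLTM implementation such as the one distributed with MALLET:

```python
from collections import defaultdict

# Hypothetical corpus: each record carries its language and a linking key
# (here, the ID of the press release an article covers); all field names
# and contents are illustrative assumptions, not taken from the study.
articles = [
    {"lang": "en", "link_id": "pr-101", "tokens": ["free", "movement", "policy"]},
    {"lang": "de", "link_id": "pr-101", "tokens": ["freizügigkeit", "politik"]},
    {"lang": "en", "link_id": "pr-102", "tokens": ["digital", "single", "market"]},
    {"lang": "de", "link_id": "pr-102", "tokens": ["digitaler", "binnenmarkt"]},
]

def build_tuples(articles, languages=("en", "de")):
    """Group documents sharing a link key into one multilingual training
    tuple; PLTM assumes all documents within a tuple share the same
    topic distribution."""
    grouped = defaultdict(dict)
    for art in articles:
        # Concatenate multiple linked documents per language
        # into a single pseudo-document
        grouped[art["link_id"]].setdefault(art["lang"], []).extend(art["tokens"])
    # Keep only tuples that cover every language, so each topic is
    # anchored in all languages during training
    return [tuple(docs[lang] for lang in languages)
            for docs in grouped.values()
            if all(lang in docs for lang in languages)]

for tup in build_tuples(articles):
    print(tup)
```

How strictly or loosely such links are drawn, and which documents end up grouped together, is precisely the kind of tuple-building choice whose consequences the study's evaluation metrics compare.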
The policy of free movement, one of the core principles of the European Union, has become increasingly politicized. This makes it all the more important to understand how attitudes toward free movement are shaped and what role the media play in shaping them. The purpose of this study is therefore to investigate how news frames affect attitudes toward free movement and whether education moderates these framing effects. The findings from a survey experiment conducted in seven European countries show that the effects are few and inconsistent across countries. This suggests that such attitudes are not easily shifted by exposure to a single news frame.