Sentiment is central to many studies in communication science, from negativity and polarization in political communication to analyses of product reviews and social media comments in other subfields. This study provides an exhaustive comparison of sentiment analysis methods, using a validation set of Dutch economic headlines to compare the performance of manual annotation, crowd coding, numerous dictionaries, and machine learning with both traditional and deep learning algorithms. The three main conclusions of this article are: (1) the best performance is still attained with trained human coders or crowd coding; (2) none of the dictionaries tested comes close to acceptable levels of validity; and (3) machine learning, especially deep learning, substantially outperforms dictionary-based methods but falls short of human performance. Based on these findings, we stress the importance of always validating automated text analysis methods before use. Moreover, we provide a recommended step-by-step approach for (automated) text analysis projects to ensure both efficiency and validity.
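To make the recommended validation step concrete, the following minimal sketch (not from the article itself; the file name, column names, and label set are illustrative assumptions) compares a tool's automatic sentiment labels against a manually coded gold standard using standard scikit-learn metrics.

```python
# Minimal sketch of validating an automatic sentiment method against a
# manually coded gold standard before trusting it. All names are hypothetical.
import pandas as pd
from sklearn.metrics import classification_report, cohen_kappa_score

# Hypothetical validation set: one row per headline, with a manual gold label
# and the label produced by the method under test (dictionary, ML model, ...).
df = pd.read_csv("validation_headlines.csv")  # columns: text, gold, predicted

# Per-class precision, recall, and F1 against the manual annotations.
print(classification_report(df["gold"], df["predicted"],
                            labels=["negative", "neutral", "positive"]))

# Chance-corrected agreement with the gold standard (Cohen's kappa).
print("kappa:", cohen_kappa_score(df["gold"], df["predicted"]))
```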
At odds: laughing and thinking? The appreciation, processing, and persuasiveness of political satire. Boukes, M., Boomgaarden, H., Moorman, M., & de Vreese, C.H.
This study constructs and experimentally tests an integrated framework of how political attitudes are affected by political satire. On the one hand, we show that political satire positively affects the attitude toward the satirized subject via perceived funniness. This effect was particularly strong among those who did not perceive the satire as potentially threatening, in line with disposition theory. On the other hand, young adults were found to be more absorbed in the satirical items, which decreased counterarguing, such that the attitude toward the satirized subject was affected negatively. This is consistent with the entertainment-education literature and theory about people's life stages. Investigating underlying and conditional processes thereby proved a valuable approach to detecting the mechanisms by which satire influences attitudes.
Despite their scholarly popularity, concepts describing important developments in political communication, such as soft news and infotainment, lack conceptual clarity. This article tackles that problem and introduces a multilevel framework of the softening of journalistic political communication, showing that the four most prominent concepts, (a) sensationalism, (b) hard and soft news (HSN), (c) infotainment, and (d) tabloidization, together with (e) the eroding of the boundaries of journalism, can be distinguished in a hierarchical model. By softening, we mean a metaconcept representing developments in political journalism that are observed at different levels of investigation, from journalism as a system (macrolevel) down to single media items (microlevel).
This study investigates how the use of Twitter and Facebook affects citizens' knowledge acquisition, and whether this effect is conditional on people's political interest. Using a panel survey design with repeated measures of knowledge acquisition, this study is able to disentangle causality and to demonstrate that more frequent use of Twitter positively affects the acquisition of current affairs knowledge. The opposite is found for Facebook: more frequent Facebook use causes a decline in knowledge acquisition. This negative effect of Facebook use occurred particularly among citizens with less political interest, thereby amplifying the existing knowledge gap between politically interested and uninterested citizens.
This article scrutinizes the method of automated content analysis for measuring the tone of news coverage. We compare a range of off-the-shelf sentiment analysis tools to manually coded economic news and examine the agreement among these dictionary approaches themselves. We assess the performance of five off-the-shelf sentiment analysis tools and two tailor-made dictionary-based approaches. The analyses lead to five conclusions. First, there is little overlap between the off-the-shelf tools, causing wide divergence in tone measurements. Second, the overlap with manual coding is no stronger for short texts (i.e., headlines) than for long texts (i.e., full articles). Third, an approach that combines the individual dictionaries achieves comparably good performance. Fourth, precision may increase to acceptable levels at higher levels of granularity. Fifth, the performance of dictionary approaches depends more on the number of relevant keywords in the dictionary than on the number of valenced words as such; a small tailor-made lexicon was not inferior to large established dictionaries. Altogether, we conclude that off-the-shelf sentiment analysis tools are mostly unreliable and unsuitable for research purposes, at least in the context of Dutch economic news, and that manual validation for the specific language, domain, and genre of the research project at hand is always warranted.
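As an illustration of the tool-agreement check described in the first conclusion, the sketch below (the input file and tool names are hypothetical assumptions, not the article's actual tools) computes pairwise chance-corrected agreement between the labels produced by several sentiment tools on the same documents.

```python
# Sketch of checking how much several off-the-shelf sentiment tools overlap:
# pairwise Cohen's kappa over their labels for the same set of articles.
from itertools import combinations

import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Hypothetical frame: one column of sentiment labels per tool, one row per article.
tools = pd.read_csv("tool_labels.csv")  # e.g. columns: tool_a, tool_b, tool_c

for a, b in combinations(tools.columns, 2):
    kappa = cohen_kappa_score(tools[a], tools[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```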