Participatory formats in online journalism offer increased options for user comments to reach a mass audience, but they also enable the spreading of incivility. As a result, journalists feel the need to moderate offensive user comments in order to prevent the derailment of discussion threads. However, little is known about the principles on which forum moderation is based. The current study aims to fill this void through an automated content analysis of 673,361 user comments (including all incoming and rejected comments) from the largest newspaper forum in Germany (Spiegel Online), examining each comment's moderation decision, the topic addressed, and the use of insulting language. The analyses revealed that the deletion of user comments is a frequently used moderation strategy. Overall, more than one-third of the comments studied were rejected. Further, users mostly engaged with political topics. The use of swear words was not by itself a reason to block a comment, except when offenses were used in connection with politically sensitive topics. We discuss the results in light of the necessity for journalists to establish consistent and transparent moderation strategies.
“Fake news” is an expression that became popularized during the 2016 U.S. election. The rather inflationary use of the term since then has changed its meaning, making it an ambiguous and contested concept. The term is currently used both as (a) a derogatory label denouncing media and journalism; and (b) an umbrella term for various forms of wrong, misguided, or fabricated information. Fake news in the latter sense is part of a larger spectrum ranging from unintentional misinformation (e.g., sloppy reporting) to intentional disinformation (e.g., propaganda). The phenomenon encompasses various departures from factuality along several dimensions of the transmitted information, ranging from actual content features (including text, images, etc.) to meta-information (e.g., headlines, author information) to more contextual aspects (e.g., framing), and combinations of these are possible. As fake news is not a well-defined concept, and current definitional approaches are not unidimensional, different research groups strive for more precise terms to describe the phenomenon.
The COVID-19 pandemic has not only had severe political, economic, and societal effects; it has also affected media and communication systems in unprecedented ways. While traditional journalistic media have tried to adapt to the rapidly evolving situation, alternative news media on the Internet have given the events their own ideological spin. Such voices have been criticized for furthering societal confusion and spreading potentially dangerous "fake news" or conspiracy theories via social media and other online channels. The current study analyzes the factual basis of such fears in an initial computational content analysis of alternative news media's output on Facebook during the early coronavirus crisis, based on a large German data set spanning January to the second half of March 2020. Using computational content analysis methods, the reach, interactions, actors, and topics of the messages were examined, as well as the use of fabricated news and conspiracy theories. The analysis revealed that the alternative news media stayed true to the message patterns and ideological foundations identified in prior research. While they did not spread obvious lies, they predominantly shared overly critical, even anti-systemic messages, opposing the view of the mainstream news media and the political establishment. With this pandemic populism, they contribute to a contradictory, menacing, and distrusting worldview, as portrayed in detail in this analysis.
Online media offer unprecedented access to digital public spheres, largely enhancing users’ opportunities for participation and providing new means for strengthening democratic discourse. At the same time, recent decades have demonstrated that online discourses are often characterised by so-called ‘dark participation’: the spreading of lies and incivility. Using ‘problematic behaviour theory’ as a framework and focusing on incivility as a specific form of dark participation, this article investigates the role of users’ personal characteristics, media use, and online experiences in relation to offensive and hateful online behaviour. Using a random-quota survey of the German population, we explored how dark personality traits, political attitudes and emotions, the frequency and spaces of online-media use, and users’ experiences with both civil and uncivil online discourses predicted participants’ own uncivil behaviour, such as posting, sharing, or liking uncivil content. We found that 46% of the participants who had witnessed incivility in the last three months also engaged in uncivil participation. A hierarchical logistic regression analysis showed that incivility was associated with manipulative personality traits as measured by the dark triad, right-wing populist voting intentions, and frequent social-media use. Experiences with both civil comments and hate speech predicted higher levels of uncivil participation. The strongest predictor was participants’ personal experiences with online victimisation. Overall, the results confirmed that dark participation in the sense of uncivil engagement results from the interplay of personality traits, an online environment that allows for deviant engagement, and, most importantly, participants’ experiences in said environment.