Could the media’s attention to misinformation and fake news be harmful? We examine whether media coverage of misinformation, alongside untrustworthy content and partisan sites, contributes to rising erroneous beliefs and declining trust among U.S. citizens. We test this using experimental (Study 1) and observational (Study 2) data. Study 1 finds that both exposure to actual misinformation and exposure to coverage of misinformation have short-term but not long-term consequences for misperceptions. Study 2 shows that behaviorally tracked visits to untrustworthy sites and exposure to content covering misinformation (although relatively rare) both predict lower trust, and that visits to liberal news sites boost trust in scientists. Although the direct impact of untrustworthy sites and of coverage of misinformation on misperceptions is short-lived, it may matter more for other outcomes, such as trust, that are crucial to the functioning of democracies.
Fact-checking remains one of the most prominent ways to counter misinformation. Increasingly, however, fact-checkers struggle to match the speed of (mis)information production and tend to focus on false information at the expense of true information. Could Large Language Models (LLMs) like ChatGPT help enhance the efficiency and expediency of fact-checking? We conducted a systematic analysis of ChatGPT’s fact-checking performance by submitting 12,784 fact-checked statements to ChatGPT as a zero-shot classification task. We found that ChatGPT accurately categorized statements in 72% of cases, with significantly greater accuracy at identifying true claims (80%) than false claims (67%). Overall, ChatGPT did better on statements that were fact-checked more recently (even after its training cutoff date). These findings demonstrate the potential of ChatGPT to help label misinformation. However, LLMs like ChatGPT should not replace the crucial work of human fact-checking experts in upholding the accuracy of information dissemination.
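The abstract does not reproduce the prompt, so as a rough illustration, here is a minimal sketch of what zero-shot claim classification with the OpenAI Python client could look like. The model name, prompt wording, and TRUE/FALSE label set are assumptions for the example, not the authors' actual protocol.

```python
# Minimal sketch: zero-shot classification of a fact-checked statement.
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def classify_claim(statement: str) -> str:
    """Ask the model to label a statement as TRUE or FALSE (zero-shot)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative choice, not the paper's model spec
        temperature=0,           # deterministic output for classification
        messages=[
            {
                "role": "system",
                "content": "You are a fact-checking assistant. "
                           "Answer with exactly one word: TRUE or FALSE.",
            },
            {
                "role": "user",
                "content": f"Is the following statement true or false?\n{statement}",
            },
        ],
    )
    return response.choices[0].message.content.strip().upper()

print(classify_claim("The Great Wall of China is visible from the Moon."))
```

Looping such a call over each of the 12,784 statements and comparing the returned label against the fact-checkers' verdict would yield the kind of accuracy figures reported above.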
What consequences should political parties expect when they make sudden policy U-turns? We link the causes of policy changes to their consequences and argue that voters' evaluations of policy shifts are shaped by their perceptions of why those shifts occurred in the first place. Building on mental models, a notion we borrow from cognitive psychology, we expect that voters start from their perceptions of whether party change happened on principled grounds or for electoral gains (the premises) and make probabilistic predictions about the party's future commitment (the inference). We suggest that, while U-turns in general can damage a party's reputation, principled changes brought about by new scientific evidence or major crises need not have negative implications, because such changes can give the party new grounds for credibility. We test our expectations in a pre-registered randomized survey experiment in Germany (n = 3127) featuring two classes of party change: strategic and principled shifts. We find that voters generally punish political parties for reversing course regardless of the reason, even when external circumstances suggest change may be necessary. Given that political and societal change is sometimes imperative, these findings have direct implications for democracies.
Current interventions to combat misinformation, including fact-checking, media literacy tips, and media coverage of misinformation, may have unintended democratic consequences. We propose that these interventions may increase skepticism toward all information, including accurate information. Across three online survey experiments in three diverse countries (the US, Poland, and Hong Kong; total N = 6127), we test the unintended consequences of existing strategies and compare them with three alternative interventions against misinformation. We examine how exposure to fact-checking, media literacy tips, and media coverage of misinformation affects individuals' perceptions of both factual and false information, as well as their trust in key democratic institutions. Our results show that while all interventions successfully reduce belief in false information, they also undermine the credibility of factual information. This highlights the need for improved strategies that minimize the harms and maximize the benefits of interventions against misinformation.
Social media companies are introducing features that let users monetize engagement, a model borrowed from blockchain-based decentralized social media. These steps are potentially worrisome: monetizing engagement may create incentives to post objectionable content. However, it is unclear to what extent such negative outcomes are likely to occur. To address this question, we first administered a survey, which showed that many users have a poor understanding of the mechanisms behind monetization. Second, we conducted a survey experiment to examine the effects of hypothetical monetary incentives. We find that a simple nudge about the possibility of earning money for engagement increases willingness to share different kinds of news, including misinformation. Penalties for objectionable posts diminish the positive effect of monetary rewards on misinformation sharing but do not eliminate it. These results have policy implications for content moderation practices if platforms embrace decentralization and monetization.