Misinformation often has an ongoing effect on people’s memory and inferential reasoning even after clear corrections are provided; this is known as the continued influence effect. In pursuit of more effective corrections, one factor that has not yet been investigated systematically is the narrative versus non-narrative format of the correction. Some scholars have suggested that a narrative format facilitates comprehension and retention of complex information and may serve to overcome resistance to worldview-dissonant corrections. It is therefore possible that misinformation corrections are more effective when presented in a narrative rather than a non-narrative format. The present study tests this possibility. We designed corrections that are either narrative or non-narrative, while minimizing differences in informativeness. We compared narrative and non-narrative corrections in three preregistered experiments (total N = 2279). Experiment 1 targeted misinformation contained in fictional event reports; Experiment 2 used false claims commonly encountered in the real world; Experiment 3 used real-world false claims that are controversial, in order to test the notion that a narrative format may facilitate corrective updating primarily when it serves to reduce resistance to correction. In all experiments, we also manipulated test delay (immediate vs. 2 days), as any potential benefit of the narrative format may arise only in the short term (if the story format aids primarily with initial comprehension and updating of the relevant mental model) or only after a delay (if the story format aids primarily with later correction retrieval). In all three experiments, narrative corrections were no more effective than non-narrative corrections. Therefore, while stories and anecdotes can be powerful, there is no fundamental benefit of using a narrative format when debunking misinformation.
Given that being misinformed can have negative ramifications, finding optimal corrective techniques has become a key focus of research. In recent years, several divergent correction formats have been proposed as superior based on distinct theoretical frameworks. However, these correction formats have not been compared in controlled settings, so the suggested superiority of each format remains speculative. Across four experiments, the current paper investigated how altering the format of corrections influences people’s subsequent reliance on misinformation. We examined whether myth-first, fact-first, fact-only, or myth-only correction formats were most effective, using a range of different materials and participant pools. Experiments 1 and 2 focused on climate change misconceptions; participants were Qualtrics online panel members and students taking part in a massive open online course, respectively. Experiments 3 and 4 used misconceptions from a diverse set of topics, with Amazon Mechanical Turk crowdworkers and university student participants. We found that the impact of a correction on beliefs and inferential reasoning was largely independent of the specific format used. The clearest evidence for any potential relative superiority emerged in Experiment 4, which found that the myth-first format was more effective at myth correction than the fact-first format after a delayed retention interval. However, in general it appeared that as long as the key ingredients of a correction were presented, format did not make a considerable difference. This suggests that simply providing corrective information, regardless of format, is far more important than how the correction is presented.
Reliance on misinformation often persists in the face of corrections. However, the role of social factors in people's reliance on corrected misinformation has received little attention. In two experiments, we investigated the extent to which social endorsement of misinformation and corrections influences belief updating. In both experiments, misinformation and fact-checks were presented as social media posts, and social endorsement was manipulated via the number of "likes." In Experiment 1, social endorsement of the initial misinformation had a significant influence on belief; participants believed misinformation with high social endorsement more than misinformation with low endorsement. This effect was observed both pre-fact-check and post-fact-check. High social endorsement of the fact-checks was associated with reduced misinformation belief; however, evidence for the persistence of this effect was mixed. These findings were replicated in Experiment 2. Our findings indicate that social endorsement can moderate belief in misinformation.
Given the potential negative impact reliance on misinformation can have, substantial effort has gone into understanding the factors that influence misinformation belief and propagation. However, despite the rise of social media often being cited as a fundamental driver of misinformation exposure and false beliefs, how people process misinformation on social media platforms has been under-investigated. This is partially due to a lack of adaptable and ecologically valid social media testing paradigms, resulting in an over-reliance on survey software and questionnaire-based measures. To provide researchers with a flexible tool to investigate the processing and sharing of misinformation on social media, this paper presents The Misinformation Game—an easily adaptable, open-source online testing platform that simulates key characteristics of social media. Researchers can customize posts (e.g., headlines, images), source information (e.g., handles, avatars, credibility), and engagement information (e.g., a post’s number of likes and dislikes). The platform allows a range of response options for participants (like, share, dislike, flag) and supports comments. The simulator can also present posts on individual pages or in a scrollable feed, and can provide customized dynamic feedback to participants via changes to their follower count and credibility score, based on how they interact with each post. Notably, no specific programming skills are required to create studies using the simulator. Here, we outline the key features of the simulator and provide a non-technical guide for use by researchers. We also present results from two validation studies. All the source code and instructions are freely available online at https://misinfogame.com.
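To make the platform description above concrete, the following is a minimal, hypothetical sketch in Python of the kind of data a study like this might manipulate: a post with engagement information, a source with a credibility rating, and a dynamic-feedback rule that adjusts a participant's follower count and credibility score based on their interactions. The class names, field names, scales, and update rules here are illustrative assumptions, not The Misinformation Game's actual schema or scoring logic; see https://misinfogame.com for the real configuration options.

```python
from dataclasses import dataclass


@dataclass
class Source:
    handle: str
    credibility: int  # assumed 0-100 scale; purely illustrative


@dataclass
class Post:
    headline: str
    source: Source
    likes: int
    dislikes: int
    is_true: bool  # ground-truth label used for dynamic feedback


@dataclass
class Participant:
    followers: int = 100
    credibility: int = 50

    def interact(self, post: Post, action: str) -> None:
        # Illustrative dynamic-feedback rule (an assumption, not the
        # platform's actual rule): sharing a false post lowers the
        # participant's credibility, sharing a true one raises it,
        # and correctly flagging a false post earns a small bonus.
        if action == "share":
            self.credibility += 2 if post.is_true else -5
            self.followers += 3
        elif action == "flag" and not post.is_true:
            self.credibility += 1


# Example interaction: sharing a false post with high engagement.
participant = Participant()
post = Post(
    headline="Miracle cure discovered",
    source=Source(handle="@newsnow", credibility=30),
    likes=950,
    dislikes=12,
    is_true=False,
)
participant.interact(post, "share")
# participant.credibility drops to 45; participant.followers rises to 103
```

A sketch like this is only a stand-in for the simulator's no-code configuration interface, but it captures the key design idea: participants' visible standing changes as a function of how they engage with true and false content.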