Bayesian statistics offers a normative account of how a person should update their original beliefs (i.e., their priors) in light of new evidence (i.e., the likelihood). Previous research suggests that people tend to under-weight both the prior (base rate neglect) and the likelihood (conservatism), although this varies by individual and situation. Yet this work generally elicits people's knowledge as single point estimates (e.g., x has a 5% probability of occurring) rather than as a full distribution. Here we demonstrate the utility of eliciting and fitting full distributions when studying these questions. Across three experiments, we found substantial variation in the extent to which people showed base rate neglect and conservatism, which our method allowed us, for the first time, to measure simultaneously at the level of the individual. While most people tended to disregard the base rate, they did so less when the prior was made explicit. Although many individuals were conservative, there was no apparent systematic relationship between base rate neglect and conservatism within individuals. We suggest that this method shows great potential for studying human probabilistic reasoning.
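To make the weighting idea concrete, one common formalization (an illustrative sketch, not necessarily the model fitted in the abstract above) raises the prior odds and the likelihood ratio to separate exponents when forming the posterior: exponents below 1 on the prior and likelihood correspond to base rate neglect and conservatism, respectively. The numbers below are hypothetical.

```python
# Illustrative exponent-weighted Bayesian updating (hypothetical parameters).
#   posterior_odds = prior_odds**a * likelihood_ratio**b
#   a < 1 -> the prior is under-weighted (base rate neglect)
#   b < 1 -> the evidence is under-weighted (conservatism)

def posterior_probability(prior, likelihood_ratio, a=1.0, b=1.0):
    """Posterior P(H | evidence) under exponent-weighted updating."""
    prior_odds = prior / (1.0 - prior)
    post_odds = (prior_odds ** a) * (likelihood_ratio ** b)
    return post_odds / (1.0 + post_odds)

# Hypothetical example: a 5% base rate and evidence 10x more likely under H.
print(posterior_probability(0.05, 10.0))                 # ideal Bayesian: ~0.35
print(posterior_probability(0.05, 10.0, a=0.4, b=0.7))   # ~0.61: the low base rate
                                                          # is under-weighted, so the
                                                          # estimate exceeds the ideal one
```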
There is increasing pressure on social media companies to reduce the spread of misinformation on their platforms. However, these companies are reluctant to act as arbiters of truth, since truth can be subjective or otherwise hard to determine; they would prefer that social media users themselves show better discernment when deciding which information to share. Here we show that allowing people to share only those social media posts they have indicated are true significantly improves sharing discernment, as measured by the difference between the probability of sharing true information and the probability of sharing false information. Because it does not require social media companies to be the arbiters of truth, this self-censorship intervention can be employed in situations where companies suspect that individuals are propagating misinformation but are not sufficiently confident in their suspicions to censor them directly. As such, self-censorship can usefully supplement externally imposed (i.e., traditional) censorship in reducing the propagation of false information on social media platforms.
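For concreteness, the sharing-discernment measure described above is simply the true-post sharing rate minus the false-post sharing rate. A minimal sketch with made-up counts:

```python
# Sharing discernment = P(share | true post) - P(share | false post).
# Counts below are hypothetical, for illustration only.
def sharing_discernment(shared_true, total_true, shared_false, total_false):
    return shared_true / total_true - shared_false / total_false

# e.g., 60 of 100 true posts shared vs. 25 of 100 false posts shared
print(sharing_discernment(60, 100, 25, 100))  # 0.35
```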
Despite robust evidence that misinformation continues to influence event-related reasoning after a clear retraction, evidence for the continued influence of misinformation on person impressions is mixed. Across three experiments, we investigated the impact of person-related misinformation and its correction on dynamic (moment-to-moment) impression formation. Participants iteratively formed an impression of a protagonist, “John”, based on a series of behaviour descriptions, including misinformation that was later retracted. Person impressions were recorded after each behaviour description. As predicted, we found a strong effect of information valence on person impressions: negative misinformation had a greater impact on person impressions than positive misinformation (Experiments 1 and 2). Furthermore, in each experiment participants fully discounted the misinformation once retracted, regardless of whether the misinformation was positive or negative. This was true even when the other behaviour descriptions were congruent with (Experiment 2) or causally related to (Experiment 3) the retracted misinformation. Thus, we found no evidence for the continued influence of retracted misinformation on person impressions. Our findings help to address some of the discrepant findings in the literature, suggesting that following a clear retraction, person-related misinformation can be effectively discounted, at least for the scenarios considered in this study.
The proliferation of misinformation on social media platforms has given rise to growing demands for effective intervention strategies. One suggested method is to encourage users to deliberate on the veracity of information prior to sharing. However, this strategy is undermined by individuals' propensity to share posts they acknowledge as false. Here we demonstrate that self-certification significantly enhances sharing discernment: requiring users to affirm their belief in a news post's truthfulness before sharing it markedly curtails the dissemination of false information. Importantly, this approach does not deter users from sharing content they genuinely believe to be true. Thus, we propose a method that substantially curbs the spread of misleading content on social media without infringing upon the principle of free speech.