Delusion-prone individuals may be more likely to accept even delusion-irrelevant implausible ideas because of their tendency to engage in less analytic and actively open-minded thinking. Consistent with this suggestion, two online studies with over 900 participants demonstrated that although delusion-prone individuals were no more likely to believe true news headlines, they displayed an increased belief in "fake news" headlines, which often feature implausible content. Mediation analyses suggest that analytic cognitive style may partially explain these individuals' increased willingness to believe fake news. Exploratory analyses showed that dogmatic individuals and religious fundamentalists were also more likely to believe false (but not true) news, and that these relationships may be fully explained by analytic cognitive style. Our findings suggest that existing interventions that increase analytic and actively open-minded thinking might be leveraged to help reduce belief in fake news.
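The mediation claim above can be made concrete with the standard regression-based decomposition of a total effect into direct and indirect (mediated) components. The sketch below is a hypothetical illustration on simulated data with assumed effect sizes, not the study's dataset or its exact analysis: it shows how an indirect path through analytic cognitive style (M) could account for part of the association between delusion proneness (X) and belief in fake news (Y).

```python
# Hypothetical illustration (simulated data, assumed effect sizes) of the
# regression-based mediation decomposition: total effect = direct + a*b.
import numpy as np

rng = np.random.default_rng(0)
n = 900
x = rng.normal(size=n)                        # delusion proneness (standardized), assumed
m = -0.5 * x + rng.normal(size=n)             # analytic cognitive style: lower when X is higher (assumed)
y = -0.4 * m + 0.1 * x + rng.normal(size=n)   # fake-news belief: mostly driven through M (assumed)

def ols(design, outcome):
    """Least-squares coefficients for a design matrix that already includes an intercept column."""
    coefs, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return coefs

ones = np.ones(n)
c_total = ols(np.column_stack([ones, x]), y)[1]               # total effect of X on Y
a_path  = ols(np.column_stack([ones, x]), m)[1]               # X -> M
b_path, c_direct = ols(np.column_stack([ones, m, x]), y)[1:]  # M -> Y and direct X -> Y

print(f"total={c_total:.3f}  indirect (a*b)={a_path * b_path:.3f}  direct={c_direct:.3f}")
# Partial mediation: the indirect and direct effects share the total (total = direct + a*b for OLS).
```

In practice the indirect effect a*b would be tested with bootstrapped confidence intervals rather than read off point estimates; the sketch only shows the decomposition that such analyses rest on.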
What can be done to combat political misinformation? One prominent intervention involves attaching warnings to headlines of news stories that have been disputed by third-party fact-checkers. Here we demonstrate a hitherto unappreciated potential consequence of such a warning: an implied truth effect, whereby false headlines that fail to get tagged are considered validated and thus are seen as more accurate. With a formal model, we demonstrate that Bayesian belief updating can lead to such an implied truth effect. In Study 1 (n = 5,271 MTurkers), we find that although warnings do lead to a modest reduction in the perceived accuracy of false headlines relative to a control condition (particularly for politically concordant headlines), we also observe the hypothesized implied truth effect: the presence of warnings caused untagged headlines to be seen as more accurate than in the control. In Study 2 (n = 1,568 MTurkers), we find the same effects in the context of decisions about which headlines to consider sharing on social media. We also find that attaching verifications to some true headlines, which removes the ambiguity about whether untagged headlines have not been checked or have been verified, eliminates, and in fact slightly reverses, the implied truth effect. Together these results contest theories of motivated reasoning while identifying a potential challenge for the policy of using warning tags to fight misinformation, a challenge that is particularly concerning given that it is much easier to produce misinformation than it is to debunk it.
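To make the Bayesian intuition concrete, here is a minimal numerical sketch, not the paper's formal model: assume a reader believes fact-checkers tag some fraction of false headlines and never tag true ones. Once warnings exist, the absence of a tag becomes weak evidence that a headline is true, which is the implied truth effect.

```python
# Minimal sketch (not the paper's formal model) of Bayesian updating on an
# untagged headline. Assumptions: fact-checkers tag a fraction `tag_rate`
# of false headlines and never tag true ones; `p_false` is the reader's
# prior that any given headline is false.
def posterior_false_given_untagged(p_false: float, tag_rate: float) -> float:
    p_untagged_given_false = 1.0 - tag_rate   # false headline escaped tagging
    p_untagged_given_true = 1.0               # true headlines are never tagged (assumed)
    p_untagged = p_false * p_untagged_given_false + (1 - p_false) * p_untagged_given_true
    return p_false * p_untagged_given_false / p_untagged

prior = 0.5
print(posterior_false_given_untagged(prior, tag_rate=0.0))   # 0.50: no warning policy, no update
print(posterior_false_given_untagged(prior, tag_rate=0.3))   # ~0.41: untagged headlines now look more accurate
```

With a tag rate of 0.3 and a 50/50 prior, the posterior probability that an untagged headline is false drops from 0.50 to about 0.41, so untagged false headlines are judged more accurate than they would be with no warning policy at all.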
Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation's proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner's dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes conflicting empirical results, and shed light on the nature of human cognition and social decision making.

Keywords: dual process | cooperation | evolutionary game theory | prisoner's dilemma | heuristics

Cooperation, where people pay costs to benefit others, is a defining feature of human social interaction. However, our willingness to cooperate is puzzling because of the individual costs that cooperation entails. Explaining how the "selfish" process of evolution could have given rise to seemingly altruistic cooperation has been a major focus of research across the natural and social sciences for decades. Using the tools of evolutionary game theory, great progress has been made in identifying mechanisms by which selection can favor cooperative strategies, providing ultimate explanations for the widespread cooperation observed in human societies (1). In recent years, the proximate cognitive mechanisms underpinning human cooperation have also begun to receive widespread attention. For example, a wide range of experimental evidence suggests that emotion and intuition play a key role in motivating cooperation (2-5). The dual-process perspective on decision making (6-8) offers a powerful framework for integrating these observations. In the dual-process framework, decisions are conceptualized as arising from competition between two types of cognitive processes: (i) auto...
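As a rough illustration of the kind of payoff comparison such a model involves, the sketch below computes expected payoffs for the two strategy families named in the abstract under stated simplifying assumptions (reciprocity abstracted as "cooperation earns b - c, defection earns 0", uniform deliberation costs, no assortment). It is a toy sketch, not the authors' published model.

```python
# Toy sketch (not the published model) of expected payoffs in a simplified
# dual-process prisoner's dilemma. Assumptions:
#   * With probability p the game is reciprocal; reciprocity is abstracted
#     as "you earn b - c if you cooperate, 0 if you defect".
#   * With probability 1 - p the game is one-shot: you earn b if the
#     partner cooperates and pay c if you cooperate.
#   * A strategy is (intuitive_coop, threshold): the agent pays a
#     deliberation cost d ~ Uniform[0, d_max] whenever d < threshold, and
#     deliberation picks the game-type-optimal action (C if reciprocal,
#     D if one-shot); otherwise the intuitive response is used.
from dataclasses import dataclass

@dataclass
class Strategy:
    intuitive_coop: bool   # intuitive response: cooperate?
    threshold: float       # deliberate whenever the sampled cost is below this

def expected_payoff(focal: Strategy, resident: Strategy,
                    p: float, b: float, c: float, d_max: float = 1.0) -> float:
    """Expected payoff of a focal agent matched with a resident-strategy partner."""
    def coop_prob(s: Strategy, reciprocal: bool) -> float:
        q = min(s.threshold, d_max) / d_max          # probability of deliberating
        intuitive = 1.0 if s.intuitive_coop else 0.0
        deliberate = 1.0 if reciprocal else 0.0      # deliberation: cooperate iff reciprocal
        return q * deliberate + (1 - q) * intuitive

    q = min(focal.threshold, d_max) / d_max
    mean_cost_paid = q * (min(focal.threshold, d_max) / 2)   # E[d | d < T] * P(d < T)

    repeated = (b - c) * coop_prob(focal, reciprocal=True)
    one_shot = b * coop_prob(resident, reciprocal=False) - c * coop_prob(focal, reciprocal=False)
    return p * repeated + (1 - p) * one_shot - mean_cost_paid

strategies = {
    "intuitive defector":      Strategy(intuitive_coop=False, threshold=0.0),
    "dual-process cooperator": Strategy(intuitive_coop=True,  threshold=0.25),
}
for res_name, resident in strategies.items():
    for foc_name, focal in strategies.items():
        pay = expected_payoff(focal, resident, p=0.7, b=4.0, c=1.0)
        print(f"{foc_name} vs resident {res_name}: {pay:.3f}")
```

Sweeping p, the benefit-to-cost ratio, and the cost distribution in comparisons like this, and embedding them in an evolutionary update rule with assortment, is what determines which of the two strategies selection favors in the full model.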
People's beliefs about normality play an important role in many aspects of cognition and life (e.g., causal cognition, linguistic semantics, cooperative behavior). But how do people determine what sorts of things are normal in the first place? Past research has studied both people's representations of statistical norms (e.g., the average) and their representations of prescriptive norms (e.g., the ideal). Four studies suggest that people's notion of normality incorporates both of these types of norms. In particular, people's representations of what is normal were found to be influenced both by what they believed to be descriptively average and by what they believed to be prescriptively ideal. This is shown across three domains: people's use of the word "normal" (Study 1), their use of gradable adjectives (Study 2), and their judgments of concept prototypicality (Study 3). A final study investigated the learning of normality for a novel category, showing that people actively combine statistical and prescriptive information they have learned into an undifferentiated notion of what is normal (Study 4). Taken together, these findings may help to explain how moral norms impact the acquisition of normality and, conversely, how normality impacts the acquisition of moral norms.
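As a minimal illustration of this "undifferentiated combination" idea, one can assume, purely for illustration and not as the authors' model, that judged normality is a weighted blend of the descriptive average and the prescriptive ideal:

```python
# Minimal sketch, not the authors' model: judged normality as a weighted
# blend of the descriptive average and the prescriptive ideal.
def judged_normal(average: float, ideal: float, weight_on_ideal: float = 0.4) -> float:
    # weight_on_ideal is a free parameter, assumed here for illustration
    return (1 - weight_on_ideal) * average + weight_on_ideal * ideal

# Hypothetical example: hours of TV per day, descriptively average ~4, ideally ~2.
print(judged_normal(average=4.0, ideal=2.0))  # 3.2, lying between the average and the ideal
```

Any such blend predicts the qualitative signature the studies report: the "normal" amount falls between what people take to be average and what they take to be ideal.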